Jan 22 11:40:25 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 22 11:40:25 crc restorecon[4700]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc 
restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc 
restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 
11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc 
restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 11:40:25 crc 
restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 
crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 
11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 11:40:25 crc 
restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc 
restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 11:40:25 crc restorecon[4700]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc 
restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 
crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc 
restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc 
restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc 
restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc 
restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:25 crc restorecon[4700]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:25 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc 
restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 11:40:26 crc restorecon[4700]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 11:40:26 crc restorecon[4700]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 22 11:40:26 crc kubenswrapper[4874]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 11:40:26 crc kubenswrapper[4874]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 22 11:40:26 crc kubenswrapper[4874]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 11:40:26 crc kubenswrapper[4874]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 22 11:40:26 crc kubenswrapper[4874]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 22 11:40:26 crc kubenswrapper[4874]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.501160 4874 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.508522 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.508864 4874 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.509061 4874 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.509217 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.509374 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.509575 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.509716 4874 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.509881 4874 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.510031 4874 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.510169 4874 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.510303 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.510561 4874 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.510729 4874 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.510880 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.511016 4874 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.511163 4874 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.511296 4874 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.511454 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.511582 4874 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.511696 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.511796 4874 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.511896 4874 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.511994 4874 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.512091 4874 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.512189 4874 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.512297 4874 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.512428 4874 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.512536 4874 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.512653 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.512801 4874 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.512939 4874 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.513089 4874 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.513245 4874 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.513390 4874 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.513588 4874 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.513726 4874 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.513857 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.513995 4874 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.514124 4874 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.514255 4874 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.514450 4874 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.514568 4874 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.514675 4874 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.514779 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.514881 4874 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.514995 4874 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.515160 4874 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.515322 4874 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.515482 4874 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.515590 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.515710 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.515813 4874 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.515913 4874 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.516043 4874 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.516172 4874 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.516283 4874 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.516460 4874 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.516591 4874 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.516730 4874 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.516838 4874 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.516958 4874 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.517106 4874 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.517248 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.517388 4874 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.517537 4874 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.517647 4874 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.517754 4874 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.517921 4874 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.518043 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.518146 4874 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.518245 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.518607 4874 flags.go:64] FLAG: --address="0.0.0.0"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.518751 4874 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.518888 4874 flags.go:64] FLAG: --anonymous-auth="true"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.519007 4874 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.519126 4874 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.519261 4874 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.519427 4874 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.519545 4874 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.519650 4874 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.519753 4874 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.519882 4874 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.520008 4874 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.520470 4874 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.520607 4874 flags.go:64] FLAG: --cgroup-root=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.520714 4874 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.520816 4874 flags.go:64] FLAG: --client-ca-file=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.520918 4874 flags.go:64] FLAG: --cloud-config=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.521024 4874 flags.go:64] FLAG: --cloud-provider=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.521178 4874 flags.go:64] FLAG: --cluster-dns="[]"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.521316 4874 flags.go:64] FLAG: --cluster-domain=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.521465 4874 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.521580 4874 flags.go:64] FLAG: --config-dir=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.521684 4874 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.521788 4874 flags.go:64] FLAG: --container-log-max-files="5"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.521895 4874 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.522016 4874 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.522124 4874 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.522244 4874 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.522464 4874 flags.go:64] FLAG: --contention-profiling="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.522624 4874 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.522744 4874 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.522848 4874 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.522978 4874 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.523090 4874 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.523192 4874 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.523305 4874 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.523462 4874 flags.go:64] FLAG: --enable-load-reader="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.523649 4874 flags.go:64] FLAG: --enable-server="true"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.523804 4874 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.523918 4874 flags.go:64] FLAG: --event-burst="100"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.523948 4874 flags.go:64] FLAG: --event-qps="50"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.523959 4874 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.523971 4874 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.523984 4874 flags.go:64] FLAG: --eviction-hard=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524001 4874 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524011 4874 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524021 4874 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524039 4874 flags.go:64] FLAG: --eviction-soft=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524051 4874 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524063 4874 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524077 4874 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524088 4874 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524100 4874 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524109 4874 flags.go:64] FLAG: --fail-swap-on="true"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524119 4874 flags.go:64] FLAG: --feature-gates=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524131 4874 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524140 4874 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524150 4874 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524159 4874 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524169 4874 flags.go:64] FLAG: --healthz-port="10248"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524179 4874 flags.go:64] FLAG: --help="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524189 4874 flags.go:64] FLAG: --hostname-override=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524197 4874 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524207 4874 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524216 4874 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524226 4874 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524235 4874 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524244 4874 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524253 4874 flags.go:64] FLAG: --image-service-endpoint=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524262 4874 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524271 4874 flags.go:64] FLAG: --kube-api-burst="100"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524280 4874 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524290 4874 flags.go:64] FLAG: --kube-api-qps="50"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524300 4874 flags.go:64] FLAG: --kube-reserved=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524309 4874 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524319 4874 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524328 4874 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524338 4874 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524347 4874 flags.go:64] FLAG: --lock-file=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524355 4874 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524365 4874 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524375 4874 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524389 4874 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524435 4874 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524448 4874 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524460 4874 flags.go:64] FLAG: --logging-format="text"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524471 4874 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524484 4874 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524495 4874 flags.go:64] FLAG: --manifest-url=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524506 4874 flags.go:64] FLAG: --manifest-url-header=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524523 4874 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524536 4874 flags.go:64] FLAG: --max-open-files="1000000"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524547 4874 flags.go:64] FLAG: --max-pods="110"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524557 4874 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524567 4874 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524577 4874 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524586 4874 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524596 4874 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524605 4874 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524615 4874 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524637 4874 flags.go:64] FLAG: --node-status-max-images="50"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524647 4874 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524656 4874 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524665 4874 flags.go:64] FLAG: --pod-cidr=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524674 4874 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524688 4874 flags.go:64] FLAG: --pod-manifest-path=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524698 4874 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524707 4874 flags.go:64] FLAG: --pods-per-core="0"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524716 4874 flags.go:64] FLAG: --port="10250"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524726 4874 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524735 4874 flags.go:64] FLAG: --provider-id=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524744 4874 flags.go:64] FLAG: --qos-reserved=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524754 4874 flags.go:64] FLAG: --read-only-port="10255"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524763 4874 flags.go:64] FLAG: --register-node="true"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524772 4874 flags.go:64] FLAG: --register-schedulable="true"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524781 4874 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524797 4874 flags.go:64] FLAG: --registry-burst="10"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524806 4874 flags.go:64] FLAG: --registry-qps="5"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524815 4874 flags.go:64] FLAG: --reserved-cpus=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524825 4874 flags.go:64] FLAG: --reserved-memory=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524837 4874 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524846 4874 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524856 4874 flags.go:64] FLAG: --rotate-certificates="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524865 4874 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524874 4874 flags.go:64] FLAG: --runonce="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524884 4874 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524894 4874 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524903 4874 flags.go:64] FLAG: --seccomp-default="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524912 4874 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524921 4874 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524931 4874 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524940 4874 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524950 4874 flags.go:64] FLAG: --storage-driver-password="root"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524959 4874 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.524991 4874 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525001 4874 flags.go:64] FLAG: --storage-driver-user="root"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525010 4874 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525020 4874 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525032 4874 flags.go:64] FLAG: --system-cgroups=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525043 4874 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525062 4874 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525073 4874 flags.go:64] FLAG: --tls-cert-file=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525085 4874 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525102 4874 flags.go:64] FLAG: --tls-min-version=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525113 4874 flags.go:64] FLAG: --tls-private-key-file=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525124 4874 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525136 4874 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525147 4874 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525159 4874 flags.go:64] FLAG: --v="2"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525173 4874 flags.go:64] FLAG: --version="false"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525185 4874 flags.go:64] FLAG: --vmodule=""
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525196 4874 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.525205 4874 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525497 4874 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525512 4874 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525522 4874 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525531 4874 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525540 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525551 4874 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525564 4874 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525574 4874 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525583 4874 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525593 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525603 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525613 4874 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525623 4874 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525633 4874 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525642 4874 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525652 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525661 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525670 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525680 4874 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525690 4874 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525700 4874 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525708 4874 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525719 4874 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525728 4874 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525736 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525745 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525754 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525762 4874 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525770 4874 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525778 4874 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525786 4874 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525794 4874 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525801 4874 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525809 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525817 4874 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525824 4874 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525832 4874 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525840 4874 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525849 4874 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525856 4874 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525864 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525872 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525880 4874 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525887 4874 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525895 4874 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525903 4874 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525910 4874 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525918 4874 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525925 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525934 4874 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525943 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525950 4874 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525958 4874 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525966 4874 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525973 4874 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525981 4874 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525988 4874 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.525997 4874 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.526005 4874 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.526012 4874 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.526020 4874 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.526030 4874 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.526041 4874 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.526052 4874 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.526066 4874 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.526078 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.526087 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.526095 4874 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.526104 4874 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.526113 4874 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.526122 4874 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.526486 4874 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.538253 4874 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.538308 4874 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538499 4874 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538515 4874 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538524 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538534 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538542 4874 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538550 4874 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538558 4874 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538565 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538574 4874 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538582 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538591 4874 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538599 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538607 4874 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538615 4874 feature_gate.go:330] unrecognized feature gate: 
ImageStreamImportMode Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538623 4874 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538630 4874 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538641 4874 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538654 4874 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538663 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538672 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538680 4874 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538688 4874 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538697 4874 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538706 4874 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538715 4874 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538724 4874 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538733 4874 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538742 4874 feature_gate.go:330] unrecognized feature gate: 
GCPLabelsTags Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538750 4874 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538759 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538768 4874 feature_gate.go:330] unrecognized feature gate: Example Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538777 4874 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538785 4874 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538793 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538804 4874 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538812 4874 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538821 4874 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538832 4874 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538841 4874 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.538849 4874 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539008 4874 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539016 4874 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539024 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539034 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539042 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539052 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539061 4874 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539071 4874 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539081 4874 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539089 4874 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539099 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539107 4874 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539115 4874 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539123 4874 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539131 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539139 4874 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539147 4874 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539155 4874 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539163 4874 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539171 4874 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539180 4874 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539187 4874 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539195 4874 feature_gate.go:330] 
unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539203 4874 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539213 4874 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539222 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539231 4874 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539239 4874 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539246 4874 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539255 4874 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539453 4874 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.539469 4874 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539765 4874 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539786 4874 feature_gate.go:330] unrecognized feature gate: 
OnClusterBuild Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539796 4874 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539806 4874 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539816 4874 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539824 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539832 4874 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539840 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539851 4874 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539862 4874 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539871 4874 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539880 4874 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539888 4874 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539896 4874 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539905 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539912 4874 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539921 4874 feature_gate.go:330] unrecognized feature gate: Example Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539929 4874 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539937 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539948 4874 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539957 4874 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539967 4874 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539977 4874 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539985 4874 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.539995 4874 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540003 4874 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540010 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540018 4874 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540027 4874 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540036 4874 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540045 4874 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540053 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540061 4874 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540068 4874 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540077 4874 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540086 4874 
feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540095 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540103 4874 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540110 4874 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540118 4874 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540127 4874 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540135 4874 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540143 4874 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540154 4874 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540163 4874 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540172 4874 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540179 4874 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540187 4874 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540195 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540203 4874 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540212 4874 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540220 4874 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540228 4874 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540236 4874 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540244 4874 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540251 4874 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540259 4874 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540267 4874 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540275 4874 feature_gate.go:330] unrecognized 
feature gate: VSphereMultiVCenters Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540283 4874 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540291 4874 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540299 4874 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540307 4874 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540315 4874 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540323 4874 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540330 4874 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540338 4874 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540346 4874 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540355 4874 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540362 4874 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.540371 4874 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.540385 4874 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false 
ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.540996 4874 server.go:940] "Client rotation is on, will bootstrap in background" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.546087 4874 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.546210 4874 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.547077 4874 server.go:997] "Starting client certificate rotation" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.547113 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.547564 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-18 20:59:55.229962454 +0000 UTC Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.547718 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.559579 4874 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 11:40:26 crc kubenswrapper[4874]: E0122 11:40:26.561645 4874 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.563557 4874 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.574436 4874 log.go:25] "Validated CRI v1 runtime API" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.605190 4874 log.go:25] "Validated CRI v1 image API" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.607685 4874 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.615072 4874 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-22-11-35-48-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.615128 4874 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.643305 4874 manager.go:217] Machine: {Timestamp:2026-01-22 11:40:26.63890127 +0000 UTC m=+0.483972410 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} 
HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:400a411a-8387-4bfb-bbce-2d30a7ad1d2e BootID:770ef4c0-49b6-4adf-aa62-b643a71c762c Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:42:de:d0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:42:de:d0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:e7:52:07 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:7f:bf:6b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5c:84:64 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:1f:26:0a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e2:36:1a:a0:57:9b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:96:79:96:9f:55:69 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 
BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} 
{Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.643771 4874 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.644071 4874 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.644886 4874 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.645115 4874 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.645163 4874 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.645383 4874 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.645411 4874 container_manager_linux.go:303] "Creating device plugin manager" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.645652 4874 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.645685 4874 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.645843 4874 state_mem.go:36] "Initialized new in-memory state store" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.645938 4874 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.647704 4874 kubelet.go:418] "Attempting to sync node with API server" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.647743 4874 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.647794 4874 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.647816 4874 kubelet.go:324] "Adding apiserver pod source" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.647835 4874 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.651300 4874 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.652081 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Jan 22 11:40:26 crc kubenswrapper[4874]: E0122 11:40:26.652182 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.652174 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.652259 4874 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 22 11:40:26 crc kubenswrapper[4874]: E0122 11:40:26.652274 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.653321 4874 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.653991 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.654021 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.654031 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.654040 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.654053 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 22 
11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.654061 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.654069 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.654084 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.654094 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.654104 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.654138 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.654147 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.655550 4874 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.655980 4874 server.go:1280] "Started kubelet" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.656326 4874 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.656635 4874 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 11:40:26 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.657916 4874 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.658816 4874 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.659479 4874 server.go:460] "Adding debug handlers to kubelet server" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.659552 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.659596 4874 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.659615 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:08:42.214282251 +0000 UTC Jan 22 11:40:26 crc kubenswrapper[4874]: E0122 11:40:26.659018 4874 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.153:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d0abaf15b3618 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:40:26.655954456 +0000 UTC m=+0.501025526,LastTimestamp:2026-01-22 11:40:26.655954456 +0000 UTC m=+0.501025526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:40:26 crc kubenswrapper[4874]: E0122 11:40:26.659822 4874 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:40:26 crc kubenswrapper[4874]: E0122 11:40:26.660043 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="200ms" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.660624 4874 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.660643 4874 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.660766 4874 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.664455 4874 factory.go:55] Registering systemd factory Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.664824 4874 factory.go:221] Registration of the systemd container factory successfully Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.664582 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Jan 22 11:40:26 crc kubenswrapper[4874]: E0122 11:40:26.665125 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Jan 22 11:40:26 crc 
kubenswrapper[4874]: I0122 11:40:26.665640 4874 factory.go:153] Registering CRI-O factory Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.665673 4874 factory.go:221] Registration of the crio container factory successfully Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.665760 4874 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.665795 4874 factory.go:103] Registering Raw factory Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.665818 4874 manager.go:1196] Started watching for new ooms in manager Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.666810 4874 manager.go:319] Starting recovery of all containers Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673656 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673699 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673712 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673722 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673732 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673741 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673749 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673758 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673769 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673778 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673786 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673795 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673804 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673816 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673826 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673834 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" 
seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673844 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673876 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673886 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673896 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673906 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673923 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673932 4874 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673941 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673951 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673959 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673970 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.673982 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674004 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674031 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674043 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674054 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674091 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674122 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674134 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674775 4874 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674801 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674816 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674828 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674855 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674868 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674882 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674895 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674908 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674922 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674935 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674948 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674960 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674974 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674986 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.674998 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675011 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675023 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675040 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675054 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675068 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675082 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675093 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675106 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675119 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675132 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675146 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675158 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675172 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675186 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675200 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675215 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675227 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675239 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675251 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675263 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" 
seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675275 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675287 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675298 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675309 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675322 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675335 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 
11:40:26.675349 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675362 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675374 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675386 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675415 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675429 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675441 4874 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675453 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675465 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675475 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675486 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675501 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675511 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675619 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675635 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675648 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675657 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675668 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675678 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" 
seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675688 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675697 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675707 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675716 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675725 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675734 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675744 
4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675755 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675764 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675781 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675792 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675802 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675813 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675824 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675834 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675843 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675853 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675861 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675899 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675909 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675918 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675927 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675935 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675946 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675954 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675964 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675974 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675985 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.675994 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676003 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676012 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676021 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676031 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676039 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676049 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676059 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676069 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676077 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676088 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676100 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676109 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676120 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676130 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" 
seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676140 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676151 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676161 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676170 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676180 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676191 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 
11:40:26.676200 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676210 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676220 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676229 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676240 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676251 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676265 4874 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676276 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676286 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676297 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676307 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676317 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676326 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676335 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676344 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676353 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676363 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676373 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676381 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676392 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676419 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676432 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676448 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676457 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676467 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" 
seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676476 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676485 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676496 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676517 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676530 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676543 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: 
I0122 11:40:26.676556 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676569 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676578 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676587 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676596 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676606 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676618 4874 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676631 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676644 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676655 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676664 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676671 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676681 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676689 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676698 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676709 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676721 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676733 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676744 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676756 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676766 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676776 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676785 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676794 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676803 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676813 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676822 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676831 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676847 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676858 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676871 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676882 4874 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676893 4874 reconstruct.go:97] "Volume reconstruction finished" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.676900 4874 reconciler.go:26] "Reconciler: start to sync state" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.703427 4874 manager.go:324] Recovery completed Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.711900 4874 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.714774 4874 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.714839 4874 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.714876 4874 kubelet.go:2335] "Starting kubelet main sync loop" Jan 22 11:40:26 crc kubenswrapper[4874]: E0122 11:40:26.714956 4874 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.715109 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.716979 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:26 crc kubenswrapper[4874]: W0122 11:40:26.716952 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.717014 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.717028 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:26 crc kubenswrapper[4874]: E0122 11:40:26.717033 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.717966 4874 
cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.717985 4874 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.718005 4874 state_mem.go:36] "Initialized new in-memory state store" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.729739 4874 policy_none.go:49] "None policy: Start" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.730590 4874 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.730635 4874 state_mem.go:35] "Initializing new in-memory state store" Jan 22 11:40:26 crc kubenswrapper[4874]: E0122 11:40:26.760667 4874 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.790547 4874 manager.go:334] "Starting Device Plugin manager" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.790743 4874 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.790757 4874 server.go:79] "Starting device plugin registration server" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.791156 4874 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.791168 4874 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.792843 4874 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.792948 4874 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.792963 4874 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Jan 22 11:40:26 crc kubenswrapper[4874]: E0122 11:40:26.802227 4874 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.815422 4874 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.815520 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.816442 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.816495 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.816504 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.816655 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.816789 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.816838 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.817537 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.817537 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.817590 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.817603 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.817569 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.817678 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.817779 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.817813 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.817921 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.818548 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.818590 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.818598 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.818728 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.818824 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.818859 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.819235 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.819253 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.819260 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.819319 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.819332 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.819353 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.819407 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.819355 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.819424 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.819872 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.819894 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.819904 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.820011 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.820050 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.820072 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.820207 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.820230 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.820243 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:26 crc 
kubenswrapper[4874]: I0122 11:40:26.820370 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.820417 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.821896 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.821920 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.821934 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:26 crc kubenswrapper[4874]: E0122 11:40:26.860605 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="400ms" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.879808 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.879839 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.879854 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.879905 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.879956 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.879985 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.880013 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.880045 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.880077 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.880091 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.880124 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.880160 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.880188 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.880209 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.880234 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.891518 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.892542 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.892576 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.892586 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.892609 4874 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 11:40:26 crc kubenswrapper[4874]: E0122 11:40:26.892970 4874 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981468 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981530 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981558 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981606 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981621 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981635 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981682 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981709 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981724 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981726 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981791 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981814 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981742 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981753 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981866 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981865 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981908 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981931 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981934 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981789 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981909 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981955 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981970 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.981991 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.982004 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.982011 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.982031 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.982035 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.982060 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:26 crc kubenswrapper[4874]: I0122 11:40:26.982156 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.093086 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.098298 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.098375 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.098390 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.098465 4874 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 11:40:27 crc kubenswrapper[4874]: E0122 11:40:27.100470 4874 kubelet_node_status.go:99] "Unable to register node with API server" 
err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.144128 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.162480 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.167639 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 11:40:27 crc kubenswrapper[4874]: W0122 11:40:27.175205 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-83977d959dbca6dfd6522126305eb40bab16667bd2ec177ae03c909fd33a2d1f WatchSource:0}: Error finding container 83977d959dbca6dfd6522126305eb40bab16667bd2ec177ae03c909fd33a2d1f: Status 404 returned error can't find the container with id 83977d959dbca6dfd6522126305eb40bab16667bd2ec177ae03c909fd33a2d1f Jan 22 11:40:27 crc kubenswrapper[4874]: W0122 11:40:27.185667 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-1aea8224c175d8183911a2292f27d018418a06cbe5d51d3fc276d5847d1d84e5 WatchSource:0}: Error finding container 1aea8224c175d8183911a2292f27d018418a06cbe5d51d3fc276d5847d1d84e5: Status 404 returned error can't find the container with id 1aea8224c175d8183911a2292f27d018418a06cbe5d51d3fc276d5847d1d84e5 Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.195364 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.200308 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:40:27 crc kubenswrapper[4874]: W0122 11:40:27.211579 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a651fd97867f232685589b5afe94f8d15e5575a9ecd28a0cff736f11302df9d1 WatchSource:0}: Error finding container a651fd97867f232685589b5afe94f8d15e5575a9ecd28a0cff736f11302df9d1: Status 404 returned error can't find the container with id a651fd97867f232685589b5afe94f8d15e5575a9ecd28a0cff736f11302df9d1 Jan 22 11:40:27 crc kubenswrapper[4874]: W0122 11:40:27.218364 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-bd5bea8fe7867db666be5d68769277907eacd0ae9d8bb853d7b44c0ff74a37ff WatchSource:0}: Error finding container bd5bea8fe7867db666be5d68769277907eacd0ae9d8bb853d7b44c0ff74a37ff: Status 404 returned error can't find the container with id bd5bea8fe7867db666be5d68769277907eacd0ae9d8bb853d7b44c0ff74a37ff Jan 22 11:40:27 crc kubenswrapper[4874]: E0122 11:40:27.261854 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="800ms" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.501020 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.502384 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.502442 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.502451 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.502478 4874 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 11:40:27 crc kubenswrapper[4874]: E0122 11:40:27.502823 4874 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Jan 22 11:40:27 crc kubenswrapper[4874]: W0122 11:40:27.604964 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Jan 22 11:40:27 crc kubenswrapper[4874]: E0122 11:40:27.605062 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.659781 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:36:01.059206616 +0000 UTC Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.660260 4874 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 
38.102.83.153:6443: connect: connection refused Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.722834 4874 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1" exitCode=0 Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.722935 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1"} Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.723148 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"837bc53ae60e9b2b157444f4ed293849fa379eac3ca656cfc2679215eb4fc027"} Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.723321 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.724427 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.724489 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.724511 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.725578 4874 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b54087988f1e51ce17beb3055e35b3ff31a6aa3cc3d687a18a7f6afdf9505e4d" exitCode=0 Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.725673 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b54087988f1e51ce17beb3055e35b3ff31a6aa3cc3d687a18a7f6afdf9505e4d"} Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.725736 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"83977d959dbca6dfd6522126305eb40bab16667bd2ec177ae03c909fd33a2d1f"} Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.725842 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.726937 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.727311 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.727341 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.727351 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.727523 4874 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832" exitCode=0 Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.727551 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832"} Jan 22 11:40:27 crc 
kubenswrapper[4874]: I0122 11:40:27.727589 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bd5bea8fe7867db666be5d68769277907eacd0ae9d8bb853d7b44c0ff74a37ff"} Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.727667 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.727868 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.727902 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.727914 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.728313 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.728336 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.728345 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.731107 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793"} Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.731150 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a651fd97867f232685589b5afe94f8d15e5575a9ecd28a0cff736f11302df9d1"} Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.732864 4874 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f" exitCode=0 Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.732903 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f"} Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.732960 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1aea8224c175d8183911a2292f27d018418a06cbe5d51d3fc276d5847d1d84e5"} Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.733240 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.734313 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.734366 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:27 crc kubenswrapper[4874]: I0122 11:40:27.734383 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:27 crc kubenswrapper[4874]: W0122 11:40:27.823438 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Jan 22 11:40:27 crc kubenswrapper[4874]: E0122 11:40:27.823538 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Jan 22 11:40:27 crc kubenswrapper[4874]: W0122 11:40:27.889169 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Jan 22 11:40:27 crc kubenswrapper[4874]: E0122 11:40:27.889229 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Jan 22 11:40:27 crc kubenswrapper[4874]: W0122 11:40:27.974966 4874 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Jan 22 11:40:27 crc kubenswrapper[4874]: E0122 11:40:27.975286 4874 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: 
connection refused" logger="UnhandledError" Jan 22 11:40:28 crc kubenswrapper[4874]: E0122 11:40:28.062367 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="1.6s" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.303422 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.305151 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.305183 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.305191 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.305213 4874 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 11:40:28 crc kubenswrapper[4874]: E0122 11:40:28.305606 4874 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.586142 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 11:40:28 crc kubenswrapper[4874]: E0122 11:40:28.586942 4874 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial 
tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.659829 4874 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.659885 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 11:26:35.417626993 +0000 UTC Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.736638 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00"} Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.736678 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a"} Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.736689 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a"} Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.736761 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.737374 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:28 crc 
kubenswrapper[4874]: I0122 11:40:28.737414 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.737424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.738620 4874 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6" exitCode=0 Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.738674 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6"} Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.738751 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.739339 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.739367 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.739375 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.746022 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782"} Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.746051 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88"} Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.746060 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d"} Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.746068 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea"} Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.746078 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb"} Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.746166 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.746875 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.746897 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.746906 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.748263 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"01ca3abbae7b47aa2ee502ed6cc36a325843a9f44e1c7881ba5a142bd13dd1b3"} Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.748312 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.748780 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.748796 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.748803 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.750379 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f0b6dcbe2e50cd1aa3a3e2edbc5401888ebd6f99cbbf5329245bd4f61bf7db75"} Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.750436 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6f9d6d3c847805c81649e7524a51e0d1a261d3c75d10c90e9e2d6d6a0723ff76"} Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.750447 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9feb2908cd58fbcf7ae2f0e4281b7c0ef1a68896ab514d9aa90f347f7346b479"} Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.750499 4874 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.750911 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.750928 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:28 crc kubenswrapper[4874]: I0122 11:40:28.750934 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.515380 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.660193 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:37:46.962115934 +0000 UTC Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.755111 4874 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90" exitCode=0 Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.755175 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90"} Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.755284 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.755333 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.755595 4874 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.755672 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.756508 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.756553 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.756566 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.756588 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.756574 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.756651 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.756672 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.756713 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.756738 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.907825 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.910054 4874 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.910091 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.910102 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:29 crc kubenswrapper[4874]: I0122 11:40:29.910128 4874 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 11:40:30 crc kubenswrapper[4874]: I0122 11:40:30.661445 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:28:23.646644374 +0000 UTC Jan 22 11:40:30 crc kubenswrapper[4874]: I0122 11:40:30.765074 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4"} Jan 22 11:40:30 crc kubenswrapper[4874]: I0122 11:40:30.765153 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b"} Jan 22 11:40:30 crc kubenswrapper[4874]: I0122 11:40:30.765179 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496"} Jan 22 11:40:30 crc kubenswrapper[4874]: I0122 11:40:30.765196 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572"} Jan 22 11:40:30 crc kubenswrapper[4874]: I0122 11:40:30.765206 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:30 crc kubenswrapper[4874]: I0122 11:40:30.766611 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:30 crc kubenswrapper[4874]: I0122 11:40:30.766666 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:30 crc kubenswrapper[4874]: I0122 11:40:30.766689 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:31 crc kubenswrapper[4874]: I0122 11:40:31.661870 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:57:20.499752039 +0000 UTC Jan 22 11:40:31 crc kubenswrapper[4874]: I0122 11:40:31.732660 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:31 crc kubenswrapper[4874]: I0122 11:40:31.775318 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e"} Jan 22 11:40:31 crc kubenswrapper[4874]: I0122 11:40:31.775509 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:31 crc kubenswrapper[4874]: I0122 11:40:31.775551 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:31 crc kubenswrapper[4874]: I0122 11:40:31.777017 4874 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:31 crc kubenswrapper[4874]: I0122 11:40:31.777064 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:31 crc kubenswrapper[4874]: I0122 11:40:31.777084 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:31 crc kubenswrapper[4874]: I0122 11:40:31.777113 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:31 crc kubenswrapper[4874]: I0122 11:40:31.777162 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:31 crc kubenswrapper[4874]: I0122 11:40:31.777181 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.090252 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.090515 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.092032 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.092083 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.092102 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.630688 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.663069 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:52:45.044593709 +0000 UTC Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.778550 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.778563 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.779795 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.779842 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.779858 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.781305 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.781370 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.781381 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.855173 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 11:40:32 crc kubenswrapper[4874]: I0122 11:40:32.957502 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:33 crc kubenswrapper[4874]: I0122 11:40:33.663930 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:10:11.964608875 +0000 UTC Jan 22 11:40:33 crc kubenswrapper[4874]: I0122 11:40:33.781595 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:33 crc kubenswrapper[4874]: I0122 11:40:33.783029 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:33 crc kubenswrapper[4874]: I0122 11:40:33.783087 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:33 crc kubenswrapper[4874]: I0122 11:40:33.783101 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:34 crc kubenswrapper[4874]: I0122 11:40:34.068360 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 22 11:40:34 crc kubenswrapper[4874]: I0122 11:40:34.068706 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:34 crc kubenswrapper[4874]: I0122 11:40:34.070236 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:34 crc kubenswrapper[4874]: I0122 11:40:34.070351 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:34 crc kubenswrapper[4874]: I0122 11:40:34.070569 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:34 crc kubenswrapper[4874]: I0122 11:40:34.664529 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:28:50.431402989 +0000 UTC Jan 22 11:40:34 crc kubenswrapper[4874]: I0122 11:40:34.904118 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:34 crc kubenswrapper[4874]: I0122 11:40:34.904284 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:34 crc kubenswrapper[4874]: I0122 11:40:34.905977 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:34 crc kubenswrapper[4874]: I0122 11:40:34.906051 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:34 crc kubenswrapper[4874]: I0122 11:40:34.906074 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:34 crc kubenswrapper[4874]: I0122 11:40:34.912681 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:35 crc kubenswrapper[4874]: I0122 11:40:35.090302 4874 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 11:40:35 crc kubenswrapper[4874]: I0122 11:40:35.090437 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" Jan 22 11:40:35 crc kubenswrapper[4874]: I0122 11:40:35.232726 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:40:35 crc kubenswrapper[4874]: I0122 11:40:35.233030 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:35 crc kubenswrapper[4874]: I0122 11:40:35.234896 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:35 crc kubenswrapper[4874]: I0122 11:40:35.234964 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:35 crc kubenswrapper[4874]: I0122 11:40:35.234988 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:35 crc kubenswrapper[4874]: I0122 11:40:35.665556 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:16:06.005247069 +0000 UTC Jan 22 11:40:35 crc kubenswrapper[4874]: I0122 11:40:35.787973 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:35 crc kubenswrapper[4874]: I0122 11:40:35.789569 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:35 crc kubenswrapper[4874]: I0122 11:40:35.789624 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:35 crc kubenswrapper[4874]: I0122 11:40:35.789641 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:36 crc kubenswrapper[4874]: I0122 11:40:36.322768 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-etcd/etcd-crc" Jan 22 11:40:36 crc kubenswrapper[4874]: I0122 11:40:36.323042 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:36 crc kubenswrapper[4874]: I0122 11:40:36.325015 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:36 crc kubenswrapper[4874]: I0122 11:40:36.325098 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:36 crc kubenswrapper[4874]: I0122 11:40:36.325118 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:36 crc kubenswrapper[4874]: I0122 11:40:36.666024 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:11:23.46863161 +0000 UTC Jan 22 11:40:36 crc kubenswrapper[4874]: E0122 11:40:36.802542 4874 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 11:40:37 crc kubenswrapper[4874]: I0122 11:40:37.666752 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:57:24.335993651 +0000 UTC Jan 22 11:40:38 crc kubenswrapper[4874]: I0122 11:40:38.666928 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 07:15:56.990131256 +0000 UTC Jan 22 11:40:39 crc kubenswrapper[4874]: I0122 11:40:39.258581 4874 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 11:40:39 crc kubenswrapper[4874]: I0122 11:40:39.258664 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 11:40:39 crc kubenswrapper[4874]: I0122 11:40:39.268757 4874 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 11:40:39 crc kubenswrapper[4874]: I0122 11:40:39.268828 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 11:40:39 crc kubenswrapper[4874]: I0122 11:40:39.521003 4874 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]log ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]etcd ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 22 11:40:39 crc 
kubenswrapper[4874]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/generic-apiserver-start-informers ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/priority-and-fairness-filter ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/start-apiextensions-informers ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/start-apiextensions-controllers ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/crd-informer-synced ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/start-system-namespaces-controller ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 22 11:40:39 crc kubenswrapper[4874]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 22 11:40:39 crc kubenswrapper[4874]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/bootstrap-controller ok Jan 22 11:40:39 crc kubenswrapper[4874]: 
[+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/start-kube-aggregator-informers ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/apiservice-registration-controller ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/apiservice-discovery-controller ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]autoregister-completion ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/apiservice-openapi-controller ok Jan 22 11:40:39 crc kubenswrapper[4874]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 22 11:40:39 crc kubenswrapper[4874]: livez check failed Jan 22 11:40:39 crc kubenswrapper[4874]: I0122 11:40:39.521077 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:40:39 crc kubenswrapper[4874]: I0122 11:40:39.667907 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:01:42.479109507 +0000 UTC Jan 22 11:40:40 crc kubenswrapper[4874]: I0122 11:40:40.668001 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:15:36.826225198 +0000 UTC Jan 22 11:40:41 crc kubenswrapper[4874]: I0122 11:40:41.668174 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:36:02.135745213 +0000 UTC Jan 22 11:40:42 crc kubenswrapper[4874]: I0122 11:40:42.634996 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:42 crc kubenswrapper[4874]: I0122 11:40:42.635243 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:42 crc kubenswrapper[4874]: I0122 11:40:42.636634 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:42 crc kubenswrapper[4874]: I0122 11:40:42.636676 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:42 crc kubenswrapper[4874]: I0122 11:40:42.636688 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:42 crc kubenswrapper[4874]: I0122 11:40:42.668989 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:44:07.231862002 +0000 UTC Jan 22 11:40:43 crc kubenswrapper[4874]: I0122 11:40:43.669283 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:48:11.218693277 +0000 UTC Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.247095 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.249926 4874 trace.go:236] Trace[328022024]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 11:40:30.444) (total time: 
13805ms): Jan 22 11:40:44 crc kubenswrapper[4874]: Trace[328022024]: ---"Objects listed" error: 13805ms (11:40:44.249) Jan 22 11:40:44 crc kubenswrapper[4874]: Trace[328022024]: [13.805551961s] [13.805551961s] END Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.249960 4874 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.250285 4874 trace.go:236] Trace[2096886872]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 11:40:30.159) (total time: 14090ms): Jan 22 11:40:44 crc kubenswrapper[4874]: Trace[2096886872]: ---"Objects listed" error: 14090ms (11:40:44.250) Jan 22 11:40:44 crc kubenswrapper[4874]: Trace[2096886872]: [14.090831046s] [14.090831046s] END Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.250325 4874 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.251215 4874 trace.go:236] Trace[592521690]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 11:40:30.833) (total time: 13417ms): Jan 22 11:40:44 crc kubenswrapper[4874]: Trace[592521690]: ---"Objects listed" error: 13417ms (11:40:44.251) Jan 22 11:40:44 crc kubenswrapper[4874]: Trace[592521690]: [13.41725261s] [13.41725261s] END Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.251249 4874 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.251293 4874 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.251349 4874 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 22 11:40:44 crc kubenswrapper[4874]: 
I0122 11:40:44.256947 4874 trace.go:236] Trace[646061860]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 11:40:29.809) (total time: 14447ms): Jan 22 11:40:44 crc kubenswrapper[4874]: Trace[646061860]: ---"Objects listed" error: 14443ms (11:40:44.253) Jan 22 11:40:44 crc kubenswrapper[4874]: Trace[646061860]: [14.447552629s] [14.447552629s] END Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.257008 4874 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.266363 4874 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.290654 4874 csr.go:261] certificate signing request csr-crh9q is approved, waiting to be issued Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.304158 4874 csr.go:257] certificate signing request csr-crh9q is issued Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.459083 4874 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.459138 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.525119 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.526016 4874 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.526062 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.535104 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.658531 4874 apiserver.go:52] "Watching apiserver" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.660761 4874 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.661076 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-prbck","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.661410 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.661558 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.661678 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.661719 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.661836 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.661869 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.662389 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.662489 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.662542 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-prbck" Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.662613 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.663096 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.663673 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.663972 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.664364 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.664836 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.664890 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.665111 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.665183 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.665271 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.665322 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.665617 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.666092 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.669661 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 07:54:02.467067046 +0000 UTC Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.685383 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.694292 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.705167 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.713894 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.729825 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.737300 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.748247 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.760183 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.761711 4874 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.770876 4874 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.812560 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.814628 4874 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782" exitCode=255 Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.814677 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782"} Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.821445 4874 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.821898 4874 scope.go:117] "RemoveContainer" 
containerID="ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.828335 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.837551 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.851645 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01
dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.855955 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.855989 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 
11:40:44.856346 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856385 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856459 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856477 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856497 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856511 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856528 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856571 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856586 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856601 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856618 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 
11:40:44.856639 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856660 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856682 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856701 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856718 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856734 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856752 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856767 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856778 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856783 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856822 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856843 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856859 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856875 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856863 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856892 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856909 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856926 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856941 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856957 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.856973 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857002 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857018 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857034 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857056 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 
11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857071 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857087 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857105 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857119 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857139 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857155 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857171 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857185 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857201 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857215 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857229 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857266 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857280 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857303 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857318 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857333 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857348 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 
11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857363 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857377 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857405 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857426 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857438 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857448 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857377 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857522 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857571 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857588 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 
11:40:44.857605 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857650 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857672 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857689 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857705 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857725 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857747 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857770 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857842 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857860 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857877 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 22 11:40:44 
crc kubenswrapper[4874]: I0122 11:40:44.857893 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857909 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857926 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857943 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857961 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857977 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857995 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858716 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858737 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858755 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858772 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 11:40:44 
crc kubenswrapper[4874]: I0122 11:40:44.858788 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858805 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858826 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858856 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858881 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858899 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858920 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858936 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858952 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858968 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858983 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") 
" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858998 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859014 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859035 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859052 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859068 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859084 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859098 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859115 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859129 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859145 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859161 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: 
I0122 11:40:44.859178 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859195 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859211 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859227 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859243 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859259 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859275 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859337 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859355 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859371 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859389 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859419 4874 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859435 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859458 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859483 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859499 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859516 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859537 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859555 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859571 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859586 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859600 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859617 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859631 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859647 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859690 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859708 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859747 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 
11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859764 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859782 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859803 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859821 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859839 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859855 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859870 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859889 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859906 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859921 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859939 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 11:40:44 crc 
kubenswrapper[4874]: I0122 11:40:44.859962 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859987 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857815 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.857908 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858197 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.862330 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.862388 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.862435 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.862456 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.862481 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.860918 4874 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.862307 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858328 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858744 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858900 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858931 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859219 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859927 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.859990 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.860001 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.860113 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.860485 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.860860 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.858266 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.862609 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.862881 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.863001 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.865491 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.863105 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.863333 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.862760 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.863425 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.863574 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.863728 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.863751 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.864047 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.864278 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.865102 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.865248 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.865390 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.865527 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.865592 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.865819 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.865851 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.865858 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.865951 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.865987 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.866058 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.866145 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.866173 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.866306 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.866411 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.866512 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.866807 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.866811 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.866885 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.867086 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.867324 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.867366 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.867653 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.867889 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.868020 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.867438 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.868166 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.868432 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.868450 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.868450 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.868488 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.868724 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.868182 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.868781 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.868798 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.868834 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.869034 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.869600 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.869776 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.870069 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.870143 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.870207 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.870239 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.870460 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.870986 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.871601 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.871720 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.871772 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.872062 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.872232 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.872324 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.870718 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.872468 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.872816 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.872942 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.873242 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.871784 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.871152 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.873312 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.873453 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.873511 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.871003 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.873739 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.874020 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:40:45.373987573 +0000 UTC m=+19.219058663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.874089 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.874121 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.874956 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.875177 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.875302 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.875437 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.875549 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.875628 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.875722 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.875812 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.875902 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876035 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876082 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876120 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876154 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876185 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876219 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876280 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876669 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876766 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876799 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876831 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.875767 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.875989 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876858 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.875726 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876496 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876888 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876917 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876996 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.877030 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.877060 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.877094 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.877122 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876885 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876936 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876644 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876631 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876591 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876483 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.877000 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.877031 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.876577 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.860764 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.877979 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.878010 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.878161 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.878265 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.878536 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.878884 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.878884 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.879328 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.879418 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.879769 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.880025 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.882065 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.882529 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.884116 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.884484 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.884907 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.884952 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.884981 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" 
(UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885011 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885032 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885054 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885074 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885096 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885120 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885142 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885166 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885190 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885214 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885236 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885291 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885295 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885325 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5ca785e-1db4-4e08-9ad0-66158728b48a-hosts-file\") pod \"node-resolver-prbck\" (UID: \"f5ca785e-1db4-4e08-9ad0-66158728b48a\") " pod="openshift-dns/node-resolver-prbck" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885361 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7628z\" (UniqueName: \"kubernetes.io/projected/f5ca785e-1db4-4e08-9ad0-66158728b48a-kube-api-access-7628z\") pod \"node-resolver-prbck\" (UID: \"f5ca785e-1db4-4e08-9ad0-66158728b48a\") " pod="openshift-dns/node-resolver-prbck" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885410 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885436 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885469 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885486 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885493 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885577 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885582 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885607 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885619 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885630 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885719 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885765 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885781 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885802 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885835 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885864 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 
11:40:44.885918 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885954 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.885992 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.886013 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.886030 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.886638 4874 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.886221 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.886373 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.886491 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.886490 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.886659 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.886544 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.887377 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.877171 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.887291 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.887615 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:45.38758915 +0000 UTC m=+19.232660260 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.887958 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.888107 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:45.388089406 +0000 UTC m=+19.233160516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.888530 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.888579 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.888612 4874 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.889101 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.889141 4874 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.889161 4874 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.889181 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.889253 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.889273 4874 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.889673 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.889054 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.889976 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890174 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890610 4874 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890647 4874 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890668 4874 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890690 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890709 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890727 4874 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890749 4874 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890774 4874 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890792 4874 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890811 4874 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890832 4874 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890849 4874 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890868 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890887 4874 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890906 4874 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890927 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890946 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890967 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.890986 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891002 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891022 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891041 4874 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891057 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891074 4874 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891091 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891109 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891127 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891143 4874 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891176 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc 
kubenswrapper[4874]: I0122 11:40:44.891194 4874 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891213 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891232 4874 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891251 4874 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891802 4874 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891826 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891847 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc 
kubenswrapper[4874]: I0122 11:40:44.891865 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891883 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891901 4874 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891919 4874 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891936 4874 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891953 4874 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891972 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891990 4874 reconciler_common.go:293] "Volume detached for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892009 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892026 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892042 4874 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892062 4874 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892081 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892098 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892116 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node 
\"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892134 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892151 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892167 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892183 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892200 4874 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892217 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892234 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892250 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892669 4874 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892691 4874 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892709 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892726 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892743 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892768 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892787 4874 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" 
DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892819 4874 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892843 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892863 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892882 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892899 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892918 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892936 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 
11:40:44.892955 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892972 4874 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.892990 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893008 4874 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893025 4874 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893043 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893062 4874 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893084 4874 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893102 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893122 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893140 4874 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893158 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893176 4874 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893193 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893210 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893228 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893246 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893263 4874 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893281 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893300 4874 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893318 4874 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893338 4874 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893356 4874 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893386 4874 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893443 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893461 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893480 4874 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893498 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893518 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893535 4874 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893553 4874 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893570 4874 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893587 4874 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893604 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893623 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893642 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on 
node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893661 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893679 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893698 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893717 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893735 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893752 4874 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893769 4874 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893786 4874 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893804 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893820 4874 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893840 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893856 4874 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893872 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891411 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.893918 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.894316 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.898370 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.899602 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.899897 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.891605 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.900569 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.901055 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.901441 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.901604 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.901623 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.901642 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.901736 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:45.401697933 +0000 UTC m=+19.246769213 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.903452 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.904634 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.905128 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.906084 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.906634 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.907172 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.907885 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.907933 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.907955 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.908332 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.908593 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.908723 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.908914 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.908939 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.909310 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.909629 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-4prkg"] Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.909663 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.909958 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.909986 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-pdb2m"] Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.909995 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.910057 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.910223 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.910363 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.910388 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.910421 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.910463 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:44 crc kubenswrapper[4874]: E0122 11:40:44.910504 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:45.410472485 +0000 UTC m=+19.255543755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.910539 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-krrtc"] Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.910585 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.911133 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.911535 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.911937 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.912811 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.913159 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.913467 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.913464 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.880345 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.913633 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.913633 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.913678 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.913788 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.913910 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.913920 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.913935 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.914049 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 11:40:44 crc 
kubenswrapper[4874]: I0122 11:40:44.920474 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.920680 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.922617 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.920728 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.921549 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.921748 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.933736 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.933974 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.934696 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.935676 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.936249 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.937776 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.938074 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.938328 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.938570 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.938612 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.939736 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.939769 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.942892 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.942970 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.943147 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.944693 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.944875 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.950168 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.951879 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.956565 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.962073 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.964879 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.978044 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.980455 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.988682 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:44 crc kubenswrapper[4874]: W0122 11:40:44.993890 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-37ae074ff359889d0d2a9d1bb3b631b2a01165b5eedee47c39d5f8871ab33af5 WatchSource:0}: Error finding container 37ae074ff359889d0d2a9d1bb3b631b2a01165b5eedee47c39d5f8871ab33af5: Status 404 returned error can't find the container with id 37ae074ff359889d0d2a9d1bb3b631b2a01165b5eedee47c39d5f8871ab33af5 Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.994997 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7c9653f9-cd5b-4b7a-8056-80ae8235d039-rootfs\") pod \"machine-config-daemon-4prkg\" (UID: \"7c9653f9-cd5b-4b7a-8056-80ae8235d039\") " pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995028 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995052 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995071 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-cnibin\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995086 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-multus-conf-dir\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995101 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995117 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-var-lib-kubelet\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995132 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-multus-socket-dir-parent\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995146 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-system-cni-dir\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995161 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-multus-cni-dir\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995175 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-system-cni-dir\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995191 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-run-k8s-cni-cncf-io\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995205 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-run-multus-certs\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995221 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995236 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssgjv\" (UniqueName: \"kubernetes.io/projected/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-kube-api-access-ssgjv\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995255 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-var-lib-cni-multus\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995280 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-run-netns\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995294 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-os-release\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995308 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c9653f9-cd5b-4b7a-8056-80ae8235d039-mcd-auth-proxy-config\") pod \"machine-config-daemon-4prkg\" (UID: \"7c9653f9-cd5b-4b7a-8056-80ae8235d039\") " pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995324 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-cnibin\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995338 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c9653f9-cd5b-4b7a-8056-80ae8235d039-proxy-tls\") pod \"machine-config-daemon-4prkg\" (UID: \"7c9653f9-cd5b-4b7a-8056-80ae8235d039\") " pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995357 
4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5ca785e-1db4-4e08-9ad0-66158728b48a-hosts-file\") pod \"node-resolver-prbck\" (UID: \"f5ca785e-1db4-4e08-9ad0-66158728b48a\") " pod="openshift-dns/node-resolver-prbck" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995374 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7628z\" (UniqueName: \"kubernetes.io/projected/f5ca785e-1db4-4e08-9ad0-66158728b48a-kube-api-access-7628z\") pod \"node-resolver-prbck\" (UID: \"f5ca785e-1db4-4e08-9ad0-66158728b48a\") " pod="openshift-dns/node-resolver-prbck" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995410 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-var-lib-cni-bin\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995427 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-multus-daemon-config\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995442 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995459 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txncl\" (UniqueName: \"kubernetes.io/projected/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-kube-api-access-txncl\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995482 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-os-release\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995556 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995600 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-cni-binary-copy\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995620 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-etc-kubernetes\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995654 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995667 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmx5h\" (UniqueName: \"kubernetes.io/projected/7c9653f9-cd5b-4b7a-8056-80ae8235d039-kube-api-access-kmx5h\") pod \"machine-config-daemon-4prkg\" (UID: \"7c9653f9-cd5b-4b7a-8056-80ae8235d039\") " pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995684 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5ca785e-1db4-4e08-9ad0-66158728b48a-hosts-file\") pod \"node-resolver-prbck\" (UID: \"f5ca785e-1db4-4e08-9ad0-66158728b48a\") " pod="openshift-dns/node-resolver-prbck" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.995710 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-hostroot\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996023 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996050 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node 
\"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996064 4874 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996076 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996090 4874 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996105 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996115 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996126 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996135 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 
11:40:44.996145 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996756 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996774 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996785 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996794 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996804 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996815 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996824 4874 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996833 4874 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996843 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996854 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996864 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996873 4874 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996881 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996889 4874 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996897 4874 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996913 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996922 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996930 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996939 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996948 4874 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996956 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" 
DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996965 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.996973 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997287 4874 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997298 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997306 4874 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997315 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997324 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997336 4874 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997345 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997354 4874 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997362 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997372 4874 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997382 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997438 4874 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997450 4874 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997459 4874 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997469 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997477 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997486 4874 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997495 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997503 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:44 crc kubenswrapper[4874]: I0122 11:40:44.997513 4874 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:44.997522 4874 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:44.997530 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:44.997539 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:44.997547 4874 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:44.997555 4874 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:44.997564 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:44.997572 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" 
DevicePath \"\"" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:44.997580 4874 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:44.997588 4874 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:44.997596 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:44.998181 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.009042 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.012804 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7628z\" (UniqueName: \"kubernetes.io/projected/f5ca785e-1db4-4e08-9ad0-66158728b48a-kube-api-access-7628z\") pod \"node-resolver-prbck\" (UID: \"f5ca785e-1db4-4e08-9ad0-66158728b48a\") " pod="openshift-dns/node-resolver-prbck" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.020464 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.036279 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.052674 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.071778 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.082356 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.091250 4874 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.091315 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.091501 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098244 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txncl\" (UniqueName: \"kubernetes.io/projected/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-kube-api-access-txncl\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " 
pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098297 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-var-lib-cni-bin\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098322 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-multus-daemon-config\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098346 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098374 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-os-release\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098413 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-cni-binary-copy\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 
11:40:45.098438 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-etc-kubernetes\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098471 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmx5h\" (UniqueName: \"kubernetes.io/projected/7c9653f9-cd5b-4b7a-8056-80ae8235d039-kube-api-access-kmx5h\") pod \"machine-config-daemon-4prkg\" (UID: \"7c9653f9-cd5b-4b7a-8056-80ae8235d039\") " pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098492 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-hostroot\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098514 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7c9653f9-cd5b-4b7a-8056-80ae8235d039-rootfs\") pod \"machine-config-daemon-4prkg\" (UID: \"7c9653f9-cd5b-4b7a-8056-80ae8235d039\") " pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098535 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098558 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098582 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-cnibin\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098604 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-multus-conf-dir\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098627 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-var-lib-kubelet\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098651 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-multus-socket-dir-parent\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098674 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-system-cni-dir\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098693 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-multus-cni-dir\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098714 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-system-cni-dir\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098735 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-run-k8s-cni-cncf-io\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098757 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-run-multus-certs\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098779 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssgjv\" (UniqueName: \"kubernetes.io/projected/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-kube-api-access-ssgjv\") pod 
\"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098813 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-run-netns\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098835 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-var-lib-cni-multus\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098858 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c9653f9-cd5b-4b7a-8056-80ae8235d039-mcd-auth-proxy-config\") pod \"machine-config-daemon-4prkg\" (UID: \"7c9653f9-cd5b-4b7a-8056-80ae8235d039\") " pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098882 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-cnibin\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098901 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-os-release\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: 
\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.098924 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c9653f9-cd5b-4b7a-8056-80ae8235d039-proxy-tls\") pod \"machine-config-daemon-4prkg\" (UID: \"7c9653f9-cd5b-4b7a-8056-80ae8235d039\") " pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.099319 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-cni-binary-copy\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.099619 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-var-lib-cni-bin\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.099985 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-system-cni-dir\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.099987 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-os-release\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc 
kubenswrapper[4874]: I0122 11:40:45.100150 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-multus-conf-dir\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100189 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-var-lib-kubelet\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100241 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-cnibin\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100283 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-run-netns\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100616 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-multus-cni-dir\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100683 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-hostroot\") pod 
\"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100625 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-multus-socket-dir-parent\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100684 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-var-lib-cni-multus\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100617 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100728 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-system-cni-dir\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100756 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-run-k8s-cni-cncf-io\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 
11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100769 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7c9653f9-cd5b-4b7a-8056-80ae8235d039-rootfs\") pod \"machine-config-daemon-4prkg\" (UID: \"7c9653f9-cd5b-4b7a-8056-80ae8235d039\") " pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100776 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-cnibin\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100795 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-os-release\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100846 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-host-run-multus-certs\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.100864 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-etc-kubernetes\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.101273 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.101387 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.101809 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-multus-daemon-config\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.102195 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c9653f9-cd5b-4b7a-8056-80ae8235d039-mcd-auth-proxy-config\") pod \"machine-config-daemon-4prkg\" (UID: \"7c9653f9-cd5b-4b7a-8056-80ae8235d039\") " pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.103504 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c9653f9-cd5b-4b7a-8056-80ae8235d039-proxy-tls\") pod \"machine-config-daemon-4prkg\" (UID: \"7c9653f9-cd5b-4b7a-8056-80ae8235d039\") " pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.116927 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kmx5h\" (UniqueName: \"kubernetes.io/projected/7c9653f9-cd5b-4b7a-8056-80ae8235d039-kube-api-access-kmx5h\") pod \"machine-config-daemon-4prkg\" (UID: \"7c9653f9-cd5b-4b7a-8056-80ae8235d039\") " pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.117902 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssgjv\" (UniqueName: \"kubernetes.io/projected/977746b5-ac1b-4b6e-bdbc-ddd90225e68c-kube-api-access-ssgjv\") pod \"multus-krrtc\" (UID: \"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\") " pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.119936 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txncl\" (UniqueName: \"kubernetes.io/projected/0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4-kube-api-access-txncl\") pod \"multus-additional-cni-plugins-pdb2m\" (UID: \"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\") " pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.243315 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.251918 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-krrtc" Jan 22 11:40:45 crc kubenswrapper[4874]: W0122 11:40:45.257431 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c9653f9_cd5b_4b7a_8056_80ae8235d039.slice/crio-3ba93b22c47310b71d0d933debf3f1ccffae2c2dd7f4d7bbbbad4f798ce7c1a0 WatchSource:0}: Error finding container 3ba93b22c47310b71d0d933debf3f1ccffae2c2dd7f4d7bbbbad4f798ce7c1a0: Status 404 returned error can't find the container with id 3ba93b22c47310b71d0d933debf3f1ccffae2c2dd7f4d7bbbbad4f798ce7c1a0 Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.262125 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.265116 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6tmll"] Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.266226 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: W0122 11:40:45.266713 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod977746b5_ac1b_4b6e_bdbc_ddd90225e68c.slice/crio-15e7294d175437ddde2823bdef9219465ecc4f5bc18338a73afd1404b3637ca7 WatchSource:0}: Error finding container 15e7294d175437ddde2823bdef9219465ecc4f5bc18338a73afd1404b3637ca7: Status 404 returned error can't find the container with id 15e7294d175437ddde2823bdef9219465ecc4f5bc18338a73afd1404b3637ca7 Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.270901 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.270910 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.270968 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.270924 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.271061 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.271105 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.271374 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.273651 4874 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 11:40:45 crc kubenswrapper[4874]: W0122 11:40:45.280008 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dbaa8a2_585e_4ab7_a8f5_7b224ad8c5c4.slice/crio-cfd85d40370beef355d62618552b93b7c8b9c6377a7e2284aad1ac33a17abc09 WatchSource:0}: Error finding container cfd85d40370beef355d62618552b93b7c8b9c6377a7e2284aad1ac33a17abc09: Status 404 returned error can't find the container with id cfd85d40370beef355d62618552b93b7c8b9c6377a7e2284aad1ac33a17abc09 Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.288035 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.292332 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-prbck" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.299974 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-slash\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300010 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-openvswitch\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300028 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovn-node-metrics-cert\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300047 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-run-netns\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300140 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-cni-netd\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300183 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-etc-openvswitch\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300221 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-ovn\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300244 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovnkube-script-lib\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300272 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-kubelet\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300286 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-systemd-units\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300301 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-systemd\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300323 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-node-log\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300345 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovnkube-config\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300370 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9lv4\" (UniqueName: \"kubernetes.io/projected/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-kube-api-access-t9lv4\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300443 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-cni-bin\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300527 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-log-socket\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300551 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300592 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-var-lib-openvswitch\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300628 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-run-ovn-kubernetes\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.300650 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-env-overrides\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.306045 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-22 11:35:44 +0000 UTC, rotation deadline is 2026-10-10 20:37:33.202021591 +0000 UTC Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.306117 4874 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6272h56m47.895908153s for next certificate rotation Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.307242 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: W0122 11:40:45.316165 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-6083a98e963c10d34216819fff1fe8e431382ea80b86b0b07e7683f96ecfcf58 WatchSource:0}: Error finding container 6083a98e963c10d34216819fff1fe8e431382ea80b86b0b07e7683f96ecfcf58: Status 404 returned error can't find the container with id 6083a98e963c10d34216819fff1fe8e431382ea80b86b0b07e7683f96ecfcf58 Jan 22 11:40:45 crc kubenswrapper[4874]: W0122 11:40:45.319952 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-a4229c969216bf4a1eeb7c04ec3d2e7ee8178a7f38ade5781ca125e141ed5cff WatchSource:0}: Error finding container a4229c969216bf4a1eeb7c04ec3d2e7ee8178a7f38ade5781ca125e141ed5cff: Status 404 returned error can't find the container with id a4229c969216bf4a1eeb7c04ec3d2e7ee8178a7f38ade5781ca125e141ed5cff Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.323275 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.347843 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.365506 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.380808 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.394904 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.401633 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:40:45 crc 
kubenswrapper[4874]: I0122 11:40:45.401740 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.401772 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-log-socket\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.401795 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: E0122 11:40:45.401915 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402013 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.401913 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-log-socket\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: E0122 11:40:45.401916 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:40:46.401896843 +0000 UTC m=+20.246967913 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402072 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402099 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-var-lib-openvswitch\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc 
kubenswrapper[4874]: I0122 11:40:45.402123 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402144 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-run-ovn-kubernetes\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402162 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-env-overrides\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402182 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-slash\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402200 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-openvswitch\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc 
kubenswrapper[4874]: I0122 11:40:45.402221 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovn-node-metrics-cert\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402243 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-run-netns\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402265 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-cni-netd\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402283 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-etc-openvswitch\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: E0122 11:40:45.402303 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:40:45 crc kubenswrapper[4874]: E0122 11:40:45.402322 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402327 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-ovn\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: E0122 11:40:45.402338 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402360 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-run-ovn-kubernetes\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: E0122 11:40:45.402372 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:46.402360418 +0000 UTC m=+20.247431488 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:45 crc kubenswrapper[4874]: E0122 11:40:45.402431 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:40:45 crc kubenswrapper[4874]: E0122 11:40:45.402462 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:46.40245188 +0000 UTC m=+20.247522950 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:40:45 crc kubenswrapper[4874]: E0122 11:40:45.402475 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:46.402468961 +0000 UTC m=+20.247540031 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402501 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-var-lib-openvswitch\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402303 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-ovn\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402562 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovnkube-script-lib\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402596 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-kubelet\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402614 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-systemd-units\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402637 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-systemd\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402657 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-node-log\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402684 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovnkube-config\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402703 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9lv4\" (UniqueName: \"kubernetes.io/projected/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-kube-api-access-t9lv4\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402733 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-cni-bin\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402819 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-cni-bin\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402916 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-env-overrides\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402961 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-run-netns\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402964 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-systemd\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.402998 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-cni-netd\") pod \"ovnkube-node-6tmll\" (UID: 
\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.403001 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-etc-openvswitch\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.403032 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-kubelet\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.403038 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-systemd-units\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.403085 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-node-log\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.403119 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-slash\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 
11:40:45.403494 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-openvswitch\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.403609 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovnkube-script-lib\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.408688 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovnkube-config\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.413388 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.417780 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovn-node-metrics-cert\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.424119 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9lv4\" (UniqueName: \"kubernetes.io/projected/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-kube-api-access-t9lv4\") pod \"ovnkube-node-6tmll\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.430615 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.443003 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.458945 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.473780 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.491309 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.504238 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:40:45 crc kubenswrapper[4874]: E0122 11:40:45.505523 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:40:45 crc kubenswrapper[4874]: E0122 11:40:45.505558 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:40:45 crc kubenswrapper[4874]: E0122 11:40:45.505571 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:45 crc kubenswrapper[4874]: E0122 11:40:45.505624 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:46.505609753 +0000 UTC m=+20.350680823 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.578383 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.669845 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:08:42.838200569 +0000 UTC Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.818054 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-prbck" event={"ID":"f5ca785e-1db4-4e08-9ad0-66158728b48a","Type":"ContainerStarted","Data":"aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.818348 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-prbck" event={"ID":"f5ca785e-1db4-4e08-9ad0-66158728b48a","Type":"ContainerStarted","Data":"5ee10dc6fb3000354d81b9f4fbbcb513ac5ba37ac6db6b8ed3f180cb62d9e8ea"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.818995 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krrtc" event={"ID":"977746b5-ac1b-4b6e-bdbc-ddd90225e68c","Type":"ContainerStarted","Data":"600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.819021 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krrtc" 
event={"ID":"977746b5-ac1b-4b6e-bdbc-ddd90225e68c","Type":"ContainerStarted","Data":"15e7294d175437ddde2823bdef9219465ecc4f5bc18338a73afd1404b3637ca7"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.819752 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a4229c969216bf4a1eeb7c04ec3d2e7ee8178a7f38ade5781ca125e141ed5cff"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.821273 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.822621 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.822867 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.823623 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.823697 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6083a98e963c10d34216819fff1fe8e431382ea80b86b0b07e7683f96ecfcf58"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.824826 4874 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.824854 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.824869 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"37ae074ff359889d0d2a9d1bb3b631b2a01165b5eedee47c39d5f8871ab33af5"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.826274 4874 generic.go:334] "Generic (PLEG): container finished" podID="0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4" containerID="b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a" exitCode=0 Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.826337 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" event={"ID":"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4","Type":"ContainerDied","Data":"b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.826356 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" event={"ID":"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4","Type":"ContainerStarted","Data":"cfd85d40370beef355d62618552b93b7c8b9c6377a7e2284aad1ac33a17abc09"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.827889 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.827918 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.827932 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"3ba93b22c47310b71d0d933debf3f1ccffae2c2dd7f4d7bbbbad4f798ce7c1a0"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.831903 4874 generic.go:334] "Generic (PLEG): container finished" podID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerID="f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f" exitCode=0 Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.831953 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.831980 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerStarted","Data":"1a325212afdff76674e62ed80b3cf828c221bb977e244657ca644f4020d804af"} Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.835858 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:45Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.846726 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:45Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.859077 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:45Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.872154 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:45Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.883869 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:45Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.896843 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:45Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.910741 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:45Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.925045 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:45Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.940371 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:45Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.960753 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:45Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.975957 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:45Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.987201 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:45Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:45 crc kubenswrapper[4874]: I0122 11:40:45.997756 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:45Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.015248 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.026026 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.038686 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.052036 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.068320 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.081486 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.147872 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.180222 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.200427 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.214562 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.229946 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.350111 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.362556 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.364083 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.378164 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.395051 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.413060 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.413169 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.413254 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:40:48.413225173 +0000 UTC m=+22.258296243 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.413268 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.413284 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.413294 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.413368 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:48.413353347 +0000 UTC m=+22.258424417 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.413443 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.413469 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.413512 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.413533 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:48.413526302 +0000 UTC m=+22.258597372 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.413614 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.413681 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:48.413669197 +0000 UTC m=+22.258740317 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.415989 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.428474 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.441155 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.455631 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.472338 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.490526 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.506217 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.514041 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.514254 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.514448 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.514464 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.514522 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" 
failed. No retries permitted until 2026-01-22 11:40:48.514504374 +0000 UTC m=+22.359575514 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.523690 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.547717 4874 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.548973 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/iptables-alerter-4ln5h/status\": read tcp 38.102.83.153:52796->38.102.83.153:6443: use of closed network connection" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.588490 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.636827 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.669192 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.670157 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:24:22.283485033 +0000 UTC Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.710106 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.715238 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.715242 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.715429 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.715252 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.715683 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:40:46 crc kubenswrapper[4874]: E0122 11:40:46.715497 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.719932 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.720801 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.721670 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.722367 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.723061 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.724716 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.725504 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.726589 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.727437 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.727980 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.728905 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.729642 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.730635 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.731169 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.732390 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.732895 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.734325 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.734783 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.735309 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.736899 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.737444 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.737972 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.740088 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.740876 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.741756 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.742347 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.743532 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.743990 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.745015 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.745550 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.745990 4874 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.746082 4874 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.748053 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.748791 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.749363 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.749921 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.751347 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.752022 4874 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.753041 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.753826 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.754836 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.755308 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.756240 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.756910 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.757937 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.758367 4874 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.759370 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.759914 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.761120 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.761747 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.762841 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.763364 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.763990 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.765174 4874 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.765757 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.793671 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.829594 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.836590 4874 generic.go:334] "Generic (PLEG): container finished" podID="0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4" containerID="d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a" exitCode=0 Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.836662 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" event={"ID":"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4","Type":"ContainerDied","Data":"d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a"} Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.840307 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerStarted","Data":"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2"} Jan 22 
11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.840359 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerStarted","Data":"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348"} Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.840370 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerStarted","Data":"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba"} Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.840383 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerStarted","Data":"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794"} Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.874167 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.908831 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.948627 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:46 crc kubenswrapper[4874]: I0122 11:40:46.990432 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.030850 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.068646 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.120525 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.151513 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.192082 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.241144 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.273152 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.331427 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.365425 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.390568 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.429749 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.451725 4874 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 
11:40:47.453354 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.453441 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.453455 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.453568 4874 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.467122 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703
f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.521809 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-q2rnk"] Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.522032 4874 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.522206 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-q2rnk" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.522312 4874 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.523285 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.523316 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.523323 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.523335 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.523344 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:47Z","lastTransitionTime":"2026-01-22T11:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:47 crc kubenswrapper[4874]: E0122 11:40:47.539688 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.543285 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.543329 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.543339 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.543358 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.543368 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:47Z","lastTransitionTime":"2026-01-22T11:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.547654 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: E0122 11:40:47.554608 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.560783 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.560834 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.560850 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.560869 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.560834 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.560881 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:47Z","lastTransitionTime":"2026-01-22T11:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:47 crc kubenswrapper[4874]: E0122 11:40:47.573091 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.576435 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.576463 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.576472 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.576487 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.576496 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:47Z","lastTransitionTime":"2026-01-22T11:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.580844 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 11:40:47 crc kubenswrapper[4874]: E0122 11:40:47.589198 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30
a7ad1d2e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.592234 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.592278 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.592290 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.592309 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.592321 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:47Z","lastTransitionTime":"2026-01-22T11:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.600613 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 11:40:47 crc kubenswrapper[4874]: E0122 11:40:47.606561 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30
a7ad1d2e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: E0122 11:40:47.606675 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.608884 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.608930 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.608945 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.608969 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.608982 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:47Z","lastTransitionTime":"2026-01-22T11:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.621251 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.623789 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/561c457d-4767-4b66-a07a-c435b7c9f161-host\") pod \"node-ca-q2rnk\" (UID: \"561c457d-4767-4b66-a07a-c435b7c9f161\") " pod="openshift-image-registry/node-ca-q2rnk" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.623817 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/561c457d-4767-4b66-a07a-c435b7c9f161-serviceca\") pod \"node-ca-q2rnk\" (UID: \"561c457d-4767-4b66-a07a-c435b7c9f161\") " pod="openshift-image-registry/node-ca-q2rnk" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.623836 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn6zq\" (UniqueName: \"kubernetes.io/projected/561c457d-4767-4b66-a07a-c435b7c9f161-kube-api-access-dn6zq\") pod \"node-ca-q2rnk\" (UID: \"561c457d-4767-4b66-a07a-c435b7c9f161\") " pod="openshift-image-registry/node-ca-q2rnk" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.671150 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:22:24.811162877 +0000 UTC Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.672712 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1
c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.711823 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.711870 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.711883 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.711901 4874 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.711912 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:47Z","lastTransitionTime":"2026-01-22T11:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.716689 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.724530 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn6zq\" (UniqueName: \"kubernetes.io/projected/561c457d-4767-4b66-a07a-c435b7c9f161-kube-api-access-dn6zq\") pod \"node-ca-q2rnk\" (UID: \"561c457d-4767-4b66-a07a-c435b7c9f161\") " pod="openshift-image-registry/node-ca-q2rnk" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.724581 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/561c457d-4767-4b66-a07a-c435b7c9f161-host\") pod \"node-ca-q2rnk\" (UID: \"561c457d-4767-4b66-a07a-c435b7c9f161\") " pod="openshift-image-registry/node-ca-q2rnk" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.724604 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/561c457d-4767-4b66-a07a-c435b7c9f161-serviceca\") pod \"node-ca-q2rnk\" (UID: \"561c457d-4767-4b66-a07a-c435b7c9f161\") " pod="openshift-image-registry/node-ca-q2rnk" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.724754 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/561c457d-4767-4b66-a07a-c435b7c9f161-host\") pod \"node-ca-q2rnk\" (UID: \"561c457d-4767-4b66-a07a-c435b7c9f161\") " pod="openshift-image-registry/node-ca-q2rnk" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.725723 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/561c457d-4767-4b66-a07a-c435b7c9f161-serviceca\") pod \"node-ca-q2rnk\" (UID: \"561c457d-4767-4b66-a07a-c435b7c9f161\") " pod="openshift-image-registry/node-ca-q2rnk" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.753754 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.788924 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn6zq\" (UniqueName: \"kubernetes.io/projected/561c457d-4767-4b66-a07a-c435b7c9f161-kube-api-access-dn6zq\") pod \"node-ca-q2rnk\" (UID: \"561c457d-4767-4b66-a07a-c435b7c9f161\") " 
pod="openshift-image-registry/node-ca-q2rnk" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.810890 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.815599 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.815641 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.815651 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.815670 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.815681 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:47Z","lastTransitionTime":"2026-01-22T11:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.833542 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-q2rnk" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.845961 4874 generic.go:334] "Generic (PLEG): container finished" podID="0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4" containerID="a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14" exitCode=0 Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.846050 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" event={"ID":"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4","Type":"ContainerDied","Data":"a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14"} Jan 22 11:40:47 crc kubenswrapper[4874]: W0122 11:40:47.849240 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod561c457d_4767_4b66_a07a_c435b7c9f161.slice/crio-da7d48bcb9a2d643d27a44577a44c56b3bb28ef9b367b7558dc1ebdf9a12f7c8 WatchSource:0}: Error finding container da7d48bcb9a2d643d27a44577a44c56b3bb28ef9b367b7558dc1ebdf9a12f7c8: Status 404 returned error can't find the container with id da7d48bcb9a2d643d27a44577a44c56b3bb28ef9b367b7558dc1ebdf9a12f7c8 Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.851527 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.855127 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerStarted","Data":"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079"} Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.855172 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerStarted","Data":"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6"} Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.898752 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.919626 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.919688 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.919699 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.919717 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.919729 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:47Z","lastTransitionTime":"2026-01-22T11:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.932173 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:47 crc kubenswrapper[4874]: I0122 11:40:47.971252 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.010611 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.021816 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.021849 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.021860 4874 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.021872 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.021881 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:48Z","lastTransitionTime":"2026-01-22T11:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.049661 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.090518 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.124193 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.124235 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.124245 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.124258 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.124269 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:48Z","lastTransitionTime":"2026-01-22T11:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.130697 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.171038 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.217261 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.227049 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.227083 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.227095 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.227112 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.227125 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:48Z","lastTransitionTime":"2026-01-22T11:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.258087 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.300936 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.330027 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.330071 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.330084 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.330131 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.330148 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:48Z","lastTransitionTime":"2026-01-22T11:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.330337 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.373504 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.416163 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.430878 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.430987 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:40:52.430969746 +0000 UTC m=+26.276040826 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.431044 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.431072 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.431094 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.431190 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.431201 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.431218 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.431238 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.431249 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.431251 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-22 11:40:52.431233844 +0000 UTC m=+26.276304934 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.431285 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:52.431262745 +0000 UTC m=+26.276333825 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.431307 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:52.431297497 +0000 UTC m=+26.276368577 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.432871 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.432907 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.432919 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.432936 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.432947 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:48Z","lastTransitionTime":"2026-01-22T11:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.449105 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.492534 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.532131 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.532330 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.532352 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.532374 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.532519 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-22 11:40:52.532496485 +0000 UTC m=+26.377567565 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.535479 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.537353 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.537471 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.537502 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.537533 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.537555 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:48Z","lastTransitionTime":"2026-01-22T11:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.583075 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.611246 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.640638 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.640672 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.640681 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:48 crc 
kubenswrapper[4874]: I0122 11:40:48.640695 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.640706 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:48Z","lastTransitionTime":"2026-01-22T11:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.651530 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.671645 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:52:34.192009612 +0000 UTC Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.690988 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.715347 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.715550 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.715632 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.715672 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.715801 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:40:48 crc kubenswrapper[4874]: E0122 11:40:48.715942 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.741638 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.743214 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.743244 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.743258 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.743274 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.743286 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:48Z","lastTransitionTime":"2026-01-22T11:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.767892 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.810311 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.846181 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:48 crc 
kubenswrapper[4874]: I0122 11:40:48.846269 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.846293 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.846325 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.846349 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:48Z","lastTransitionTime":"2026-01-22T11:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.859344 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.862731 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.865672 4874 generic.go:334] "Generic (PLEG): container finished" podID="0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4" containerID="6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63" exitCode=0 Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.865730 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" event={"ID":"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4","Type":"ContainerDied","Data":"6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.866845 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q2rnk" event={"ID":"561c457d-4767-4b66-a07a-c435b7c9f161","Type":"ContainerStarted","Data":"a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.866869 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q2rnk" 
event={"ID":"561c457d-4767-4b66-a07a-c435b7c9f161","Type":"ContainerStarted","Data":"da7d48bcb9a2d643d27a44577a44c56b3bb28ef9b367b7558dc1ebdf9a12f7c8"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.887907 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.929812 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.948385 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.948435 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.948447 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.948460 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.948469 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:48Z","lastTransitionTime":"2026-01-22T11:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:48 crc kubenswrapper[4874]: I0122 11:40:48.969942 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:48Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.015731 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.049831 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.051202 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.051268 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.051284 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.051304 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.051318 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:49Z","lastTransitionTime":"2026-01-22T11:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.088282 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.128441 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c
9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.153635 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.153674 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.153685 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:49 crc 
kubenswrapper[4874]: I0122 11:40:49.153701 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.153712 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:49Z","lastTransitionTime":"2026-01-22T11:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.168946 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.212353 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.249802 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.255273 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.255304 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.255313 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.255325 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.255334 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:49Z","lastTransitionTime":"2026-01-22T11:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.290024 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.337930 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.358141 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.358176 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.358187 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.358209 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.358236 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:49Z","lastTransitionTime":"2026-01-22T11:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.376370 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.409244 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.453553 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.460445 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.460495 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.460509 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.460527 4874 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.460539 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:49Z","lastTransitionTime":"2026-01-22T11:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.563360 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.563414 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.563429 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.563446 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.563457 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:49Z","lastTransitionTime":"2026-01-22T11:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.666335 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.666429 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.666449 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.666476 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.666493 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:49Z","lastTransitionTime":"2026-01-22T11:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.671871 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 10:28:03.143420805 +0000 UTC Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.768269 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.768308 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.768318 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.768334 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.768347 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:49Z","lastTransitionTime":"2026-01-22T11:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.871037 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.871067 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.871075 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.871087 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.871097 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:49Z","lastTransitionTime":"2026-01-22T11:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.873213 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerStarted","Data":"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de"} Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.875918 4874 generic.go:334] "Generic (PLEG): container finished" podID="0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4" containerID="cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7" exitCode=0 Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.876272 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" event={"ID":"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4","Type":"ContainerDied","Data":"cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7"} Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.891279 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.906086 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.928165 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.943740 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f
2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.955147 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.965899 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.976144 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.976189 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.976201 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.976218 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.976232 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:49Z","lastTransitionTime":"2026-01-22T11:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.980630 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:49 crc kubenswrapper[4874]: I0122 11:40:49.995939 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.005956 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:50Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.014919 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:50Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.027537 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:50Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.040437 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:50Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.057578 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:50Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.068518 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:50Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.078219 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.078329 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.078445 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.078534 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.078624 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:50Z","lastTransitionTime":"2026-01-22T11:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.180491 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.180532 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.180542 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.180557 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.180568 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:50Z","lastTransitionTime":"2026-01-22T11:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.283660 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.283697 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.283709 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.283726 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.283738 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:50Z","lastTransitionTime":"2026-01-22T11:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.386281 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.386317 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.386326 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.386341 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.386350 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:50Z","lastTransitionTime":"2026-01-22T11:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.489177 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.489206 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.489216 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.489229 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.489239 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:50Z","lastTransitionTime":"2026-01-22T11:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.592653 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.592705 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.592717 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.592737 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.592748 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:50Z","lastTransitionTime":"2026-01-22T11:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.673008 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:13:06.021989114 +0000 UTC Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.695047 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.695093 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.695105 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.695124 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.695137 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:50Z","lastTransitionTime":"2026-01-22T11:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.715541 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.715559 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:40:50 crc kubenswrapper[4874]: E0122 11:40:50.715758 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:40:50 crc kubenswrapper[4874]: E0122 11:40:50.715855 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.715573 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:50 crc kubenswrapper[4874]: E0122 11:40:50.715961 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.797649 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.797714 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.797729 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.797753 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.797769 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:50Z","lastTransitionTime":"2026-01-22T11:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.882642 4874 generic.go:334] "Generic (PLEG): container finished" podID="0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4" containerID="617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94" exitCode=0 Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.882681 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" event={"ID":"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4","Type":"ContainerDied","Data":"617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94"} Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.900226 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.900261 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.900269 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.900284 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.900293 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:50Z","lastTransitionTime":"2026-01-22T11:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.902026 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:50Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.917270 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:50Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.937747 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:50Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.951963 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:50Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.969184 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:50Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:50 crc kubenswrapper[4874]: I0122 11:40:50.988083 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:50Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.000643 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:50Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.002477 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.002511 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.002521 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.002544 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.002561 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:51Z","lastTransitionTime":"2026-01-22T11:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.012828 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.026325 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.040218 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\
\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.051171 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.060989 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.073127 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.092986 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.104744 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.104792 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.104803 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.104821 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.104831 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:51Z","lastTransitionTime":"2026-01-22T11:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.211030 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.211100 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.211184 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.211223 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.211237 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:51Z","lastTransitionTime":"2026-01-22T11:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.315377 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.315730 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.315741 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.315757 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.315766 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:51Z","lastTransitionTime":"2026-01-22T11:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.417851 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.417889 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.417900 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.417916 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.417926 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:51Z","lastTransitionTime":"2026-01-22T11:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.520256 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.520280 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.520288 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.520300 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.520309 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:51Z","lastTransitionTime":"2026-01-22T11:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.622742 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.622792 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.622804 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.622822 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.622834 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:51Z","lastTransitionTime":"2026-01-22T11:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.673783 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 04:51:13.144057122 +0000 UTC Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.725201 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.725244 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.725253 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.725270 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.725280 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:51Z","lastTransitionTime":"2026-01-22T11:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.827829 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.827879 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.827891 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.827908 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.827920 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:51Z","lastTransitionTime":"2026-01-22T11:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.889172 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerStarted","Data":"72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8"} Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.889413 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.889477 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.894814 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" event={"ID":"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4","Type":"ContainerStarted","Data":"b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907"} Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.904565 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.911647 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.913349 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.917554 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.929823 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:51 crc kubenswrapper[4874]: 
I0122 11:40:51.929852 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.929862 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.929876 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.929887 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:51Z","lastTransitionTime":"2026-01-22T11:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.933982 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da8
98de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.944044 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.954302 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.966574 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.975546 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:51 crc kubenswrapper[4874]: I0122 11:40:51.990737 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T1
1:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.001304 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:51Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.010309 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.020913 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.031892 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.031932 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.031949 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.031974 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.032011 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:52Z","lastTransitionTime":"2026-01-22T11:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.033889 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.051438 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.062768 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.076826 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.094489 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.098422 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.100728 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.103194 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.113532 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.126455 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.135246 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.135285 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.135293 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.135310 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.135320 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:52Z","lastTransitionTime":"2026-01-22T11:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.140212 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.158512 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.170631 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.181767 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.193695 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b5
0dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:
40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.201970 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.212082 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.221338 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.229734 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.238310 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.238345 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.238354 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.238371 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.238381 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:52Z","lastTransitionTime":"2026-01-22T11:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.239627 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.250801 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.261283 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.273586 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b5
0dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:
40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.282501 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.292462 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.303649 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.313722 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.323252 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.338013 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.340668 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.340700 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.340710 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.340725 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.340735 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:52Z","lastTransitionTime":"2026-01-22T11:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.358820 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.369585 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.381132 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.392759 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.402667 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.417985 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:52Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.442756 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.442794 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.442823 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.442838 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.442850 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:52Z","lastTransitionTime":"2026-01-22T11:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.471378 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.471537 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.471585 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.471625 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:41:00.471590765 +0000 UTC m=+34.316661845 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.471653 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.471678 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.471690 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.471710 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:00.471694539 +0000 UTC m=+34.316765659 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.471728 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:00.471718959 +0000 UTC m=+34.316790149 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.471960 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.472021 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.472048 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.472165 4874 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:00.472132523 +0000 UTC m=+34.317203643 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.508310 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.545159 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.545195 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.545204 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.545216 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.545224 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:52Z","lastTransitionTime":"2026-01-22T11:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.573077 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.573437 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.573494 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.573521 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.573612 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:00.573589641 +0000 UTC m=+34.418660751 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.648177 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.648226 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.648236 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.648251 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.648262 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:52Z","lastTransitionTime":"2026-01-22T11:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.674097 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:10:28.206002189 +0000 UTC Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.715547 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.715626 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.715566 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.715755 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.715684 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.715845 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.750512 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.750557 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.750567 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.750580 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.750590 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:52Z","lastTransitionTime":"2026-01-22T11:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.852975 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.853023 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.853031 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.853046 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.853055 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:52Z","lastTransitionTime":"2026-01-22T11:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:52 crc kubenswrapper[4874]: E0122 11:40:52.904078 4874 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.956324 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.956371 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.956387 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.956437 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:52 crc kubenswrapper[4874]: I0122 11:40:52.956461 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:52Z","lastTransitionTime":"2026-01-22T11:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.059573 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.059623 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.059634 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.059650 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.059662 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:53Z","lastTransitionTime":"2026-01-22T11:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.163037 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.163090 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.163102 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.163120 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.163133 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:53Z","lastTransitionTime":"2026-01-22T11:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.265457 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.265498 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.265512 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.265528 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.265538 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:53Z","lastTransitionTime":"2026-01-22T11:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.368561 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.368604 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.368617 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.368633 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.368646 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:53Z","lastTransitionTime":"2026-01-22T11:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.474841 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.474883 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.474892 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.474913 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.474930 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:53Z","lastTransitionTime":"2026-01-22T11:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.577657 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.577697 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.577707 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.577725 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.577738 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:53Z","lastTransitionTime":"2026-01-22T11:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.674311 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:20:46.564312986 +0000 UTC Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.681619 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.681690 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.681706 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.681724 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.681754 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:53Z","lastTransitionTime":"2026-01-22T11:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.785028 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.785082 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.785096 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.785116 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.785131 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:53Z","lastTransitionTime":"2026-01-22T11:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.887660 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.887703 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.887711 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.887725 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.887735 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:53Z","lastTransitionTime":"2026-01-22T11:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.989723 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.989777 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.989789 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.989804 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:53 crc kubenswrapper[4874]: I0122 11:40:53.990126 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:53Z","lastTransitionTime":"2026-01-22T11:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.093457 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.093498 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.093510 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.093527 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.093538 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:54Z","lastTransitionTime":"2026-01-22T11:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.196045 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.196086 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.196095 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.196109 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.196119 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:54Z","lastTransitionTime":"2026-01-22T11:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.299892 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.299946 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.299956 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.299987 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.299999 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:54Z","lastTransitionTime":"2026-01-22T11:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.402712 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.402767 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.402779 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.402800 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.402815 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:54Z","lastTransitionTime":"2026-01-22T11:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.505490 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.505545 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.505557 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.505576 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.505589 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:54Z","lastTransitionTime":"2026-01-22T11:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.608500 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.608551 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.608569 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.608592 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.608608 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:54Z","lastTransitionTime":"2026-01-22T11:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.674547 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:32:47.479058154 +0000 UTC Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.712019 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.712088 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.712106 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.712130 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.712147 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:54Z","lastTransitionTime":"2026-01-22T11:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.715177 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.715354 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:54 crc kubenswrapper[4874]: E0122 11:40:54.715369 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.715418 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:40:54 crc kubenswrapper[4874]: E0122 11:40:54.715489 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:40:54 crc kubenswrapper[4874]: E0122 11:40:54.715572 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.814741 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.814789 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.814802 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.814822 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.814837 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:54Z","lastTransitionTime":"2026-01-22T11:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.905555 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/0.log" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.909050 4874 generic.go:334] "Generic (PLEG): container finished" podID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerID="72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8" exitCode=1 Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.909100 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8"} Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.909939 4874 scope.go:117] "RemoveContainer" containerID="72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.917223 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.917252 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.917261 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.917276 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.917286 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:54Z","lastTransitionTime":"2026-01-22T11:40:54Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.920566 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:54Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.929807 4874 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:54Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.942105 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:54Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.951184 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:54Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.961254 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:54Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:54 crc kubenswrapper[4874]: I0122 11:40:54.979500 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:54Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.004731 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.018371 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.020053 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.020086 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.020095 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.020110 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.020121 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:55Z","lastTransitionTime":"2026-01-22T11:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.033210 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.043716 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.063523 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:54Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298122 6154 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298314 6154 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 11:40:54.299178 6154 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:40:54.299220 6154 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 11:40:54.299250 6154 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:40:54.299337 6154 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 11:40:54.299719 6154 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:40:54.299899 6154 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 11:40:54.299923 6154 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:54.299953 6154 factory.go:656] Stopping watch factory\\\\nI0122 11:40:54.299985 6154 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:54.300031 6154 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:54.300052 6154 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 
11:40:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6
de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.075328 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.088950 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b5
0dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:
40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.100912 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.115969 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.123026 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.123060 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.123068 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 
11:40:55.123085 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.123094 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:55Z","lastTransitionTime":"2026-01-22T11:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.225293 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.225336 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.225351 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.225373 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.225390 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:55Z","lastTransitionTime":"2026-01-22T11:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.327976 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.328027 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.328038 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.328055 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.328066 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:55Z","lastTransitionTime":"2026-01-22T11:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.447719 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.447758 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.447766 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.447783 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.447793 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:55Z","lastTransitionTime":"2026-01-22T11:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.550253 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.550298 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.550310 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.550326 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.550339 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:55Z","lastTransitionTime":"2026-01-22T11:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.652938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.652971 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.652981 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.652998 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.653010 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:55Z","lastTransitionTime":"2026-01-22T11:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.675579 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:52:43.20206447 +0000 UTC Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.755691 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.755742 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.755757 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.755818 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.755836 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:55Z","lastTransitionTime":"2026-01-22T11:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.858184 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.858234 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.858246 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.858265 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.858276 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:55Z","lastTransitionTime":"2026-01-22T11:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.915430 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/0.log" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.919045 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerStarted","Data":"6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9"} Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.919439 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.931711 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.942283 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.958629 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:54Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298122 6154 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298314 6154 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 11:40:54.299178 6154 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:40:54.299220 6154 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 11:40:54.299250 6154 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:40:54.299337 6154 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 11:40:54.299719 6154 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:40:54.299899 6154 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 11:40:54.299923 6154 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:54.299953 6154 factory.go:656] Stopping watch factory\\\\nI0122 11:40:54.299985 6154 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:54.300031 6154 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:54.300052 6154 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 
11:40:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.960725 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.960765 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.960777 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.960795 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.960808 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:55Z","lastTransitionTime":"2026-01-22T11:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.977207 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:55 crc kubenswrapper[4874]: I0122 11:40:55.986733 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.000022 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:55Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.012364 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.022926 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.033865 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.044157 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.054316 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.063240 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.063293 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.063304 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.063319 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.063329 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:56Z","lastTransitionTime":"2026-01-22T11:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.067195 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.085387 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.098337 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.111713 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.165901 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.165940 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.165949 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:56 crc 
kubenswrapper[4874]: I0122 11:40:56.165964 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.165974 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:56Z","lastTransitionTime":"2026-01-22T11:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.268886 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.268915 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.268924 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.268938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.268949 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:56Z","lastTransitionTime":"2026-01-22T11:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.371769 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.371807 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.371816 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.371829 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.371840 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:56Z","lastTransitionTime":"2026-01-22T11:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.474137 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.474181 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.474189 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.474203 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.474212 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:56Z","lastTransitionTime":"2026-01-22T11:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.577672 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.577727 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.577743 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.577766 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.577783 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:56Z","lastTransitionTime":"2026-01-22T11:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.675748 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:18:52.744216364 +0000 UTC Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.680647 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.680678 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.680687 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.680702 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.680713 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:56Z","lastTransitionTime":"2026-01-22T11:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.716132 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.716495 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.716489 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:40:56 crc kubenswrapper[4874]: E0122 11:40:56.716361 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:40:56 crc kubenswrapper[4874]: E0122 11:40:56.717694 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:40:56 crc kubenswrapper[4874]: E0122 11:40:56.717975 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.728919 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.741689 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.752975 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.765918 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.779927 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.782764 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.783060 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.784066 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.784216 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.784347 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:56Z","lastTransitionTime":"2026-01-22T11:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.801002 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.816989 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.830643 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.843358 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.854771 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.871001 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:54Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298122 6154 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298314 6154 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 11:40:54.299178 6154 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:40:54.299220 6154 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 11:40:54.299250 6154 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:40:54.299337 6154 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 11:40:54.299719 6154 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:40:54.299899 6154 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 11:40:54.299923 6154 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:54.299953 6154 factory.go:656] Stopping watch factory\\\\nI0122 11:40:54.299985 6154 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:54.300031 6154 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:54.300052 6154 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 
11:40:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.884719 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.886335 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.886431 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:56 crc 
kubenswrapper[4874]: I0122 11:40:56.886447 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.886465 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.886500 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:56Z","lastTransitionTime":"2026-01-22T11:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.899009 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.914796 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b5
0dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:
40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.922903 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/1.log" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.923510 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/0.log" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.925955 4874 generic.go:334] "Generic (PLEG): container finished" podID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerID="6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9" exitCode=1 Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.925995 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9"} Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.926044 4874 scope.go:117] "RemoveContainer" containerID="72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.926643 4874 scope.go:117] "RemoveContainer" containerID="6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.926698 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: E0122 11:40:56.926909 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.938302 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 
11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.955712 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.968568 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.980697 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.988929 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.988968 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.988986 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:56 crc 
kubenswrapper[4874]: I0122 11:40:56.989002 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.989014 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:56Z","lastTransitionTime":"2026-01-22T11:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:56 crc kubenswrapper[4874]: I0122 11:40:56.992819 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.003334 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.022518 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:54Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298122 6154 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298314 6154 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 11:40:54.299178 6154 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:40:54.299220 6154 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 11:40:54.299250 6154 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:40:54.299337 6154 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 11:40:54.299719 6154 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:40:54.299899 6154 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 11:40:54.299923 6154 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:54.299953 6154 factory.go:656] Stopping watch factory\\\\nI0122 11:40:54.299985 6154 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:54.300031 6154 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:54.300052 6154 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 11:40:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:56Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0122 11:40:55.854329 6276 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:55.854364 6276 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 11:40:55.854411 6276 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 11:40:55.854421 6276 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 11:40:55.854565 6276 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854637 6276 controller.go:132] Adding controller 
ef_node_controller event handlers\\\\nI0122 11:40:55.854653 6276 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:55.854670 6276 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 11:40:55.854674 6276 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 11:40:55.854687 6276 factory.go:656] Stopping watch factory\\\\nI0122 11:40:55.854697 6276 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:55.854712 6276 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854735 6276 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 11:40:55.854740 6276 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 11:40:55.854749 6276 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0122 11:40:55.854804 6276 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\
\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.033439 4874 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.063025 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.075775 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b5
0dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:
40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.090888 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5"] Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.091319 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.092121 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.092158 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.092168 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.092186 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.092198 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.093793 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.093870 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.095132 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.105005 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.115156 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.125066 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.136477 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.145634 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.154211 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.164129 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.175095 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.191738 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d83
1ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.194630 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.194732 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.194746 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.194768 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.194790 4874 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.207069 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.221969 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a8470cb5-cfaf-4760-8c07-ce375052950f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x5vd5\" (UID: \"a8470cb5-cfaf-4760-8c07-ce375052950f\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.222041 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a8470cb5-cfaf-4760-8c07-ce375052950f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x5vd5\" (UID: \"a8470cb5-cfaf-4760-8c07-ce375052950f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.222067 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a8470cb5-cfaf-4760-8c07-ce375052950f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x5vd5\" (UID: \"a8470cb5-cfaf-4760-8c07-ce375052950f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.222252 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9smw\" (UniqueName: \"kubernetes.io/projected/a8470cb5-cfaf-4760-8c07-ce375052950f-kube-api-access-t9smw\") pod \"ovnkube-control-plane-749d76644c-x5vd5\" (UID: \"a8470cb5-cfaf-4760-8c07-ce375052950f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.222705 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.235657 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.250209 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 
11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.269930 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:54Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298122 6154 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298314 6154 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 11:40:54.299178 6154 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:40:54.299220 6154 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 11:40:54.299250 6154 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:40:54.299337 6154 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 11:40:54.299719 6154 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:40:54.299899 6154 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 11:40:54.299923 6154 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:54.299953 6154 factory.go:656] Stopping watch factory\\\\nI0122 11:40:54.299985 6154 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:54.300031 6154 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:54.300052 6154 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 11:40:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:56Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0122 11:40:55.854329 6276 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:55.854364 6276 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 11:40:55.854411 6276 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 11:40:55.854421 6276 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 11:40:55.854565 6276 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854637 6276 controller.go:132] Adding controller 
ef_node_controller event handlers\\\\nI0122 11:40:55.854653 6276 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:55.854670 6276 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 11:40:55.854674 6276 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 11:40:55.854687 6276 factory.go:656] Stopping watch factory\\\\nI0122 11:40:55.854697 6276 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:55.854712 6276 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854735 6276 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 11:40:55.854740 6276 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 11:40:55.854749 6276 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0122 11:40:55.854804 6276 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\
\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.284707 4874 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.297132 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.297187 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.297203 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.297223 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.297246 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.303527 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.323989 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a8470cb5-cfaf-4760-8c07-ce375052950f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x5vd5\" (UID: \"a8470cb5-cfaf-4760-8c07-ce375052950f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.324053 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a8470cb5-cfaf-4760-8c07-ce375052950f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x5vd5\" (UID: \"a8470cb5-cfaf-4760-8c07-ce375052950f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.324100 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9smw\" (UniqueName: 
\"kubernetes.io/projected/a8470cb5-cfaf-4760-8c07-ce375052950f-kube-api-access-t9smw\") pod \"ovnkube-control-plane-749d76644c-x5vd5\" (UID: \"a8470cb5-cfaf-4760-8c07-ce375052950f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.324167 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a8470cb5-cfaf-4760-8c07-ce375052950f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x5vd5\" (UID: \"a8470cb5-cfaf-4760-8c07-ce375052950f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.324599 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a8470cb5-cfaf-4760-8c07-ce375052950f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x5vd5\" (UID: \"a8470cb5-cfaf-4760-8c07-ce375052950f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.325164 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a8470cb5-cfaf-4760-8c07-ce375052950f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x5vd5\" (UID: \"a8470cb5-cfaf-4760-8c07-ce375052950f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.328707 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.336314 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/a8470cb5-cfaf-4760-8c07-ce375052950f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x5vd5\" (UID: \"a8470cb5-cfaf-4760-8c07-ce375052950f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.352337 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"nam
e\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni
-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.357928 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9smw\" (UniqueName: \"kubernetes.io/projected/a8470cb5-cfaf-4760-8c07-ce375052950f-kube-api-access-t9smw\") pod \"ovnkube-control-plane-749d76644c-x5vd5\" (UID: \"a8470cb5-cfaf-4760-8c07-ce375052950f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.376039 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.387374 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.400012 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.400053 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.400061 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.400074 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.400090 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.402190 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" Jan 22 11:40:57 crc kubenswrapper[4874]: W0122 11:40:57.411988 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8470cb5_cfaf_4760_8c07_ce375052950f.slice/crio-14cea2c3844143377a5f42b7666cebbe3d097f531368e488b35f8d214024a66f WatchSource:0}: Error finding container 14cea2c3844143377a5f42b7666cebbe3d097f531368e488b35f8d214024a66f: Status 404 returned error can't find the container with id 14cea2c3844143377a5f42b7666cebbe3d097f531368e488b35f8d214024a66f Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.452340 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.463946 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.474002 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.483092 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.492342 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.503005 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.503041 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.503050 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.503064 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.503075 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.504262 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" 
(2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.521347 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.532730 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:
28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.542907 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.556056 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.567591 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.585137 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:54Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298122 6154 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298314 6154 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 11:40:54.299178 6154 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:40:54.299220 6154 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 11:40:54.299250 6154 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:40:54.299337 6154 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 11:40:54.299719 6154 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:40:54.299899 6154 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 11:40:54.299923 6154 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:54.299953 6154 factory.go:656] Stopping watch factory\\\\nI0122 11:40:54.299985 6154 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:54.300031 6154 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:54.300052 6154 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 11:40:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:56Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0122 11:40:55.854329 6276 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:55.854364 6276 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 11:40:55.854411 6276 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 11:40:55.854421 6276 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 11:40:55.854565 6276 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854637 6276 controller.go:132] Adding controller 
ef_node_controller event handlers\\\\nI0122 11:40:55.854653 6276 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:55.854670 6276 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 11:40:55.854674 6276 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 11:40:55.854687 6276 factory.go:656] Stopping watch factory\\\\nI0122 11:40:55.854697 6276 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:55.854712 6276 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854735 6276 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 11:40:55.854740 6276 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 11:40:55.854749 6276 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0122 11:40:55.854804 6276 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\
\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.596552 4874 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.605283 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.605320 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.605328 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.605346 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.605356 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.609011 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.621810 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.634905 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b5
0dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:
40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.647959 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.675993 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:19:17.694919958 +0000 UTC Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.708173 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.708200 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.708208 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.708220 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: 
I0122 11:40:57.708228 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.746776 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.746819 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.746831 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.746848 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.746859 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: E0122 11:40:57.758091 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.761454 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.761500 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.761512 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.761530 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.761541 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: E0122 11:40:57.774575 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status […same status patch as the 11:40:57.758091 retry above…] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.782836 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.782895 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.782910 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.782931 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.782947 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: E0122 11:40:57.793985 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status […same status patch as the 11:40:57.758091 retry above…]
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.797141 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.797177 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.797186 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.797201 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.797213 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: E0122 11:40:57.808712 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.812783 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.812821 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.812834 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.812849 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.812859 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: E0122 11:40:57.825735 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: E0122 11:40:57.825911 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.827346 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.827384 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.827424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.827447 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.827460 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.929418 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.929480 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.929491 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.929532 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.929544 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:57Z","lastTransitionTime":"2026-01-22T11:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.931161 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" event={"ID":"a8470cb5-cfaf-4760-8c07-ce375052950f","Type":"ContainerStarted","Data":"abf0b02c44c270a461f55a331b0c381f4287009b626412019fc8b109a0e9c330"} Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.931263 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" event={"ID":"a8470cb5-cfaf-4760-8c07-ce375052950f","Type":"ContainerStarted","Data":"851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2"} Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.931287 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" event={"ID":"a8470cb5-cfaf-4760-8c07-ce375052950f","Type":"ContainerStarted","Data":"14cea2c3844143377a5f42b7666cebbe3d097f531368e488b35f8d214024a66f"} Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.933074 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/1.log" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.936275 4874 scope.go:117] "RemoveContainer" containerID="6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9" Jan 22 11:40:57 crc kubenswrapper[4874]: E0122 11:40:57.936572 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.951676 4874 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.964638 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.977632 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:57 crc kubenswrapper[4874]: I0122 11:40:57.988795 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:57Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.006482 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d83
1ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.019905 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.030317 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.031204 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.031299 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.031309 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:58 crc 
kubenswrapper[4874]: I0122 11:40:58.031331 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.031340 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:58Z","lastTransitionTime":"2026-01-22T11:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.042133 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11
:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 
11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.057067 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 
11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.081068 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72124241a469696f769c8a49bed7fe7008d988d3139c7894e11129e25247d0c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:54Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298122 6154 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:40:54.298314 6154 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 11:40:54.299178 6154 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:40:54.299220 6154 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 11:40:54.299250 6154 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:40:54.299337 6154 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 11:40:54.299719 6154 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:40:54.299899 6154 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0122 11:40:54.299923 6154 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:54.299953 6154 factory.go:656] Stopping watch factory\\\\nI0122 11:40:54.299985 6154 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:54.300031 6154 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:54.300052 6154 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 11:40:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:56Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0122 11:40:55.854329 6276 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:55.854364 6276 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 11:40:55.854411 6276 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 11:40:55.854421 6276 handler.go:208] Removed *v1.Node event handler 7\\\\nI0122 11:40:55.854565 6276 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854637 6276 controller.go:132] Adding controller 
ef_node_controller event handlers\\\\nI0122 11:40:55.854653 6276 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:55.854670 6276 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 11:40:55.854674 6276 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 11:40:55.854687 6276 factory.go:656] Stopping watch factory\\\\nI0122 11:40:55.854697 6276 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:55.854712 6276 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854735 6276 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 11:40:55.854740 6276 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 11:40:55.854749 6276 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0122 11:40:55.854804 6276 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\
\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.092557 4874 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.102963 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.115239 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.132875 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b5
0dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:
40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.134080 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.134140 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.134163 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.134188 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.134206 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:58Z","lastTransitionTime":"2026-01-22T11:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.146329 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.162981 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc2
8a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f4287009b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.191437 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd
791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.218229 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.234699 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.236433 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.236573 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.236660 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.236741 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.236825 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:58Z","lastTransitionTime":"2026-01-22T11:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.250923 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.266990 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.285733 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.305973 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:56Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0122 11:40:55.854329 6276 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:55.854364 6276 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 11:40:55.854411 6276 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 11:40:55.854421 6276 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0122 11:40:55.854565 6276 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854637 6276 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 11:40:55.854653 6276 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:55.854670 6276 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 11:40:55.854674 6276 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 11:40:55.854687 6276 factory.go:656] Stopping watch factory\\\\nI0122 11:40:55.854697 6276 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:55.854712 6276 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854735 6276 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 11:40:55.854740 6276 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 11:40:55.854749 6276 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0122 11:40:55.854804 6276 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca0072
53cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.321102 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f42870
09b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.334420 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.339037 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.339073 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.339088 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.339106 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.339118 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:58Z","lastTransitionTime":"2026-01-22T11:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.350154 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.364933 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.386759 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.427922 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.440948 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.440996 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.441005 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.441020 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.441031 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:58Z","lastTransitionTime":"2026-01-22T11:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.466933 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.507333 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.527225 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lr2vd"] Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.527858 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:40:58 crc kubenswrapper[4874]: E0122 11:40:58.527924 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.543652 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.543912 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.544020 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.544133 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.544214 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:58Z","lastTransitionTime":"2026-01-22T11:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.546324 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.593687 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\"
,\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 
genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b8
9c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.633467 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.638310 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.638356 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzdj\" (UniqueName: \"kubernetes.io/projected/5451fbab-ebad-42e7-bb80-f94bad10d571-kube-api-access-hmzdj\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.647638 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.647698 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.647721 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:58 
crc kubenswrapper[4874]: I0122 11:40:58.647754 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.647776 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:58Z","lastTransitionTime":"2026-01-22T11:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.672689 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.676968 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 
10:28:53.114753414 +0000 UTC Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.712159 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.715503 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.715503 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:40:58 crc kubenswrapper[4874]: E0122 11:40:58.715721 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:40:58 crc kubenswrapper[4874]: E0122 11:40:58.715782 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.715520 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:40:58 crc kubenswrapper[4874]: E0122 11:40:58.715968 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.739174 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.739210 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmzdj\" (UniqueName: \"kubernetes.io/projected/5451fbab-ebad-42e7-bb80-f94bad10d571-kube-api-access-hmzdj\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:40:58 crc kubenswrapper[4874]: E0122 11:40:58.739486 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:40:58 crc kubenswrapper[4874]: E0122 11:40:58.739717 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs podName:5451fbab-ebad-42e7-bb80-f94bad10d571 nodeName:}" failed. No retries permitted until 2026-01-22 11:40:59.239615069 +0000 UTC m=+33.084686179 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs") pod "network-metrics-daemon-lr2vd" (UID: "5451fbab-ebad-42e7-bb80-f94bad10d571") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.746840 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc 
kubenswrapper[4874]: I0122 11:40:58.750482 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.750511 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.750520 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.750534 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.750543 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:58Z","lastTransitionTime":"2026-01-22T11:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.776444 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmzdj\" (UniqueName: \"kubernetes.io/projected/5451fbab-ebad-42e7-bb80-f94bad10d571-kube-api-access-hmzdj\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.812623 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"n
ame\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.853225 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.853273 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.853290 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.853309 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.853322 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:58Z","lastTransitionTime":"2026-01-22T11:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.853821 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.901818 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:56Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0122 11:40:55.854329 6276 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:55.854364 6276 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 11:40:55.854411 6276 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 11:40:55.854421 6276 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0122 11:40:55.854565 6276 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854637 6276 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 11:40:55.854653 6276 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:55.854670 6276 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 11:40:55.854674 6276 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 11:40:55.854687 6276 factory.go:656] Stopping watch factory\\\\nI0122 11:40:55.854697 6276 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:55.854712 6276 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854735 6276 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 11:40:55.854740 6276 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 11:40:55.854749 6276 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0122 11:40:55.854804 6276 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca0072
53cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.927803 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.955234 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.955271 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.955280 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 
11:40:58.955295 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.955304 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:58Z","lastTransitionTime":"2026-01-22T11:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:58 crc kubenswrapper[4874]: I0122 11:40:58.966828 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:58Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.010063 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e1
2f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895
a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:59Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.044291 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:59Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.058020 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.058075 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.058088 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.058104 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.058116 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:59Z","lastTransitionTime":"2026-01-22T11:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.086904 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f4287009b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:59Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.125424 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-22T11:40:59Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.160761 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.160817 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.160830 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.160845 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.160855 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:59Z","lastTransitionTime":"2026-01-22T11:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.167133 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:59Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.203900 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:59Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.244255 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:40:59 crc kubenswrapper[4874]: E0122 11:40:59.244456 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:40:59 crc kubenswrapper[4874]: E0122 11:40:59.244513 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs podName:5451fbab-ebad-42e7-bb80-f94bad10d571 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:00.244496279 +0000 UTC m=+34.089567349 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs") pod "network-metrics-daemon-lr2vd" (UID: "5451fbab-ebad-42e7-bb80-f94bad10d571") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.248623 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:40:59Z is after 2025-08-24T17:21:41Z" Jan 22 11:40:59 crc 
kubenswrapper[4874]: I0122 11:40:59.262992 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.263040 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.263052 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.263067 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.263076 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:59Z","lastTransitionTime":"2026-01-22T11:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.366163 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.366220 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.366232 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.366245 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.366256 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:59Z","lastTransitionTime":"2026-01-22T11:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.468688 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.468746 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.468756 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.468771 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.468781 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:59Z","lastTransitionTime":"2026-01-22T11:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.571668 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.571720 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.571735 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.571754 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.571770 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:59Z","lastTransitionTime":"2026-01-22T11:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.674256 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.674323 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.674342 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.674366 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.674382 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:59Z","lastTransitionTime":"2026-01-22T11:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.677584 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 07:41:10.586831957 +0000 UTC Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.715473 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:40:59 crc kubenswrapper[4874]: E0122 11:40:59.715655 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.777044 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.777110 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.777126 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.777147 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.777164 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:59Z","lastTransitionTime":"2026-01-22T11:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.880074 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.880112 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.880126 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.880220 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.880273 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:59Z","lastTransitionTime":"2026-01-22T11:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.983655 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.983709 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.983723 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.983737 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:40:59 crc kubenswrapper[4874]: I0122 11:40:59.983747 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:40:59Z","lastTransitionTime":"2026-01-22T11:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.086434 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.086482 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.086495 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.086513 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.086525 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:00Z","lastTransitionTime":"2026-01-22T11:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.190187 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.190249 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.190265 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.190288 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.190305 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:00Z","lastTransitionTime":"2026-01-22T11:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.254663 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.254838 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.254910 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs podName:5451fbab-ebad-42e7-bb80-f94bad10d571 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:02.254886969 +0000 UTC m=+36.099958079 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs") pod "network-metrics-daemon-lr2vd" (UID: "5451fbab-ebad-42e7-bb80-f94bad10d571") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.292885 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.292961 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.292984 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.293013 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.293036 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:00Z","lastTransitionTime":"2026-01-22T11:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.396554 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.396618 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.396636 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.396660 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.396679 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:00Z","lastTransitionTime":"2026-01-22T11:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.499503 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.499547 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.499558 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.499574 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.499584 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:00Z","lastTransitionTime":"2026-01-22T11:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.557928 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.558095 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 11:41:16.558066673 +0000 UTC m=+50.403137783 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.558141 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.558192 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.558229 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.558477 4874 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.558513 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.558533 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.558581 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.558587 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:16.558571799 +0000 UTC m=+50.403642909 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.558668 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:16.558649762 +0000 UTC m=+50.403720862 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.558720 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.558762 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:16.558747195 +0000 UTC m=+50.403818305 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.602734 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.602772 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.602787 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.602805 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.602816 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:00Z","lastTransitionTime":"2026-01-22T11:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.660039 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.660366 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.660461 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.660490 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.660588 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:16.660555083 +0000 UTC m=+50.505626193 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.677894 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 16:14:02.824167179 +0000 UTC Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.706084 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.706160 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.706185 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.706261 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.706286 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:00Z","lastTransitionTime":"2026-01-22T11:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.716107 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.716114 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.716330 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.716316 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.716473 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:00 crc kubenswrapper[4874]: E0122 11:41:00.716556 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.809478 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.809543 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.809562 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.809588 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.809605 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:00Z","lastTransitionTime":"2026-01-22T11:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.913219 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.913298 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.913321 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.913354 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:00 crc kubenswrapper[4874]: I0122 11:41:00.913378 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:00Z","lastTransitionTime":"2026-01-22T11:41:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.016312 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.016361 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.016375 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.016414 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.016430 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:01Z","lastTransitionTime":"2026-01-22T11:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.118821 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.118882 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.118900 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.118924 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.118940 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:01Z","lastTransitionTime":"2026-01-22T11:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.221657 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.221707 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.221720 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.221742 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.221755 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:01Z","lastTransitionTime":"2026-01-22T11:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.324282 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.324346 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.324359 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.324378 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.324449 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:01Z","lastTransitionTime":"2026-01-22T11:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.426940 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.426983 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.427018 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.427038 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.427049 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:01Z","lastTransitionTime":"2026-01-22T11:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.530373 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.530482 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.530505 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.530538 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.530560 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:01Z","lastTransitionTime":"2026-01-22T11:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.634070 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.634127 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.634140 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.634159 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.634173 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:01Z","lastTransitionTime":"2026-01-22T11:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.677982 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:39:45.492806958 +0000 UTC Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.715952 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:01 crc kubenswrapper[4874]: E0122 11:41:01.716213 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.736686 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.737055 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.737554 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.737774 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.738279 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:01Z","lastTransitionTime":"2026-01-22T11:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.840903 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.840972 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.840991 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.841014 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.841033 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:01Z","lastTransitionTime":"2026-01-22T11:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.944037 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.944114 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.944139 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.944174 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:01 crc kubenswrapper[4874]: I0122 11:41:01.944246 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:01Z","lastTransitionTime":"2026-01-22T11:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.047080 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.047149 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.047176 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.047206 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.047226 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:02Z","lastTransitionTime":"2026-01-22T11:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.151038 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.151099 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.151121 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.151149 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.151174 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:02Z","lastTransitionTime":"2026-01-22T11:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.254955 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.255008 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.255020 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.255038 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.255050 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:02Z","lastTransitionTime":"2026-01-22T11:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.278057 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:02 crc kubenswrapper[4874]: E0122 11:41:02.278278 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:41:02 crc kubenswrapper[4874]: E0122 11:41:02.278369 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs podName:5451fbab-ebad-42e7-bb80-f94bad10d571 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:06.278351845 +0000 UTC m=+40.123422915 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs") pod "network-metrics-daemon-lr2vd" (UID: "5451fbab-ebad-42e7-bb80-f94bad10d571") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.358115 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.358360 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.358462 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.358595 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.358722 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:02Z","lastTransitionTime":"2026-01-22T11:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.462042 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.462100 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.462118 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.462142 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.462159 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:02Z","lastTransitionTime":"2026-01-22T11:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.564896 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.565190 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.565288 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.565383 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.565504 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:02Z","lastTransitionTime":"2026-01-22T11:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.668805 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.669117 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.669253 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.669375 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.669528 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:02Z","lastTransitionTime":"2026-01-22T11:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.679162 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 02:05:55.943411946 +0000 UTC Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.715499 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.715523 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:02 crc kubenswrapper[4874]: E0122 11:41:02.715614 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:02 crc kubenswrapper[4874]: E0122 11:41:02.715691 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.715775 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:02 crc kubenswrapper[4874]: E0122 11:41:02.715840 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.771987 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.772023 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.772033 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.772047 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.772056 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:02Z","lastTransitionTime":"2026-01-22T11:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.874265 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.874318 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.874330 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.874342 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.874351 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:02Z","lastTransitionTime":"2026-01-22T11:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.976574 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.976835 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.976924 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.977047 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:02 crc kubenswrapper[4874]: I0122 11:41:02.977141 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:02Z","lastTransitionTime":"2026-01-22T11:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.079530 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.079569 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.079582 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.079600 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.079612 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:03Z","lastTransitionTime":"2026-01-22T11:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.182323 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.182377 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.182419 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.182444 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.182461 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:03Z","lastTransitionTime":"2026-01-22T11:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.286363 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.286457 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.286476 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.286504 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.286522 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:03Z","lastTransitionTime":"2026-01-22T11:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.389181 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.389205 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.389212 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.389226 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.389236 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:03Z","lastTransitionTime":"2026-01-22T11:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.491314 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.491623 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.491695 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.491774 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.491845 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:03Z","lastTransitionTime":"2026-01-22T11:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.593783 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.593815 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.593824 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.593837 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.593846 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:03Z","lastTransitionTime":"2026-01-22T11:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.679451 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 09:03:19.94719951 +0000 UTC Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.696368 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.696424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.696439 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.696460 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.696476 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:03Z","lastTransitionTime":"2026-01-22T11:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.715903 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:03 crc kubenswrapper[4874]: E0122 11:41:03.716067 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.798954 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.799191 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.799271 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.799347 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.799475 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:03Z","lastTransitionTime":"2026-01-22T11:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.902467 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.902538 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.902563 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.902592 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:03 crc kubenswrapper[4874]: I0122 11:41:03.902614 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:03Z","lastTransitionTime":"2026-01-22T11:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.006152 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.006228 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.006251 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.006280 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.006301 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:04Z","lastTransitionTime":"2026-01-22T11:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.109604 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.109658 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.109676 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.109701 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.109720 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:04Z","lastTransitionTime":"2026-01-22T11:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.212991 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.213055 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.213072 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.213097 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.213113 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:04Z","lastTransitionTime":"2026-01-22T11:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.316487 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.316554 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.316572 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.316595 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.316612 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:04Z","lastTransitionTime":"2026-01-22T11:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.419381 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.419436 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.419448 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.419464 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.419474 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:04Z","lastTransitionTime":"2026-01-22T11:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.522992 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.523081 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.523119 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.523148 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.523170 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:04Z","lastTransitionTime":"2026-01-22T11:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.626239 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.626291 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.626299 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.626314 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.626333 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:04Z","lastTransitionTime":"2026-01-22T11:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.680173 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:45:23.711531321 +0000 UTC Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.715731 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.715760 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.715889 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:04 crc kubenswrapper[4874]: E0122 11:41:04.715999 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:04 crc kubenswrapper[4874]: E0122 11:41:04.716159 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:04 crc kubenswrapper[4874]: E0122 11:41:04.716329 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.728240 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.728271 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.728281 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.728293 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.728303 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:04Z","lastTransitionTime":"2026-01-22T11:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.830529 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.830588 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.830605 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.830625 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.830640 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:04Z","lastTransitionTime":"2026-01-22T11:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.932627 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.932966 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.933082 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.933193 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:04 crc kubenswrapper[4874]: I0122 11:41:04.933294 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:04Z","lastTransitionTime":"2026-01-22T11:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.036575 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.036617 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.036627 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.036642 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.036654 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:05Z","lastTransitionTime":"2026-01-22T11:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.139927 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.139998 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.140023 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.140048 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.140065 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:05Z","lastTransitionTime":"2026-01-22T11:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.243073 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.243601 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.243766 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.243953 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.244139 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:05Z","lastTransitionTime":"2026-01-22T11:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.347284 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.347323 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.347332 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.347349 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.347360 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:05Z","lastTransitionTime":"2026-01-22T11:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.449592 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.449639 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.449652 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.449671 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.449681 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:05Z","lastTransitionTime":"2026-01-22T11:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.551310 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.551371 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.551389 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.551435 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.551451 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:05Z","lastTransitionTime":"2026-01-22T11:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.654136 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.654184 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.654197 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.654213 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.654223 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:05Z","lastTransitionTime":"2026-01-22T11:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.680605 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:31:19.570688528 +0000 UTC Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.715962 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:05 crc kubenswrapper[4874]: E0122 11:41:05.716090 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.756673 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.756707 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.756716 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.756732 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.756742 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:05Z","lastTransitionTime":"2026-01-22T11:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.859231 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.859535 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.859620 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.859696 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.859803 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:05Z","lastTransitionTime":"2026-01-22T11:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.965476 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.965523 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.965539 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.965555 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:05 crc kubenswrapper[4874]: I0122 11:41:05.965565 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:05Z","lastTransitionTime":"2026-01-22T11:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.068161 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.068374 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.068485 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.068606 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.068726 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:06Z","lastTransitionTime":"2026-01-22T11:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.172247 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.172304 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.172320 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.172343 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.172362 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:06Z","lastTransitionTime":"2026-01-22T11:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.275827 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.275904 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.275927 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.275956 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.275977 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:06Z","lastTransitionTime":"2026-01-22T11:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.322571 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:06 crc kubenswrapper[4874]: E0122 11:41:06.322786 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:41:06 crc kubenswrapper[4874]: E0122 11:41:06.322857 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs podName:5451fbab-ebad-42e7-bb80-f94bad10d571 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:14.322839408 +0000 UTC m=+48.167910489 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs") pod "network-metrics-daemon-lr2vd" (UID: "5451fbab-ebad-42e7-bb80-f94bad10d571") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.378111 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.378190 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.378215 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.378239 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.378260 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:06Z","lastTransitionTime":"2026-01-22T11:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.480688 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.480722 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.480730 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.480745 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.480754 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:06Z","lastTransitionTime":"2026-01-22T11:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.583713 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.583776 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.583793 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.583861 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.583880 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:06Z","lastTransitionTime":"2026-01-22T11:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.681663 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 15:59:34.327535777 +0000 UTC Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.687019 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.687070 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.687082 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.687096 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.687106 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:06Z","lastTransitionTime":"2026-01-22T11:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.715826 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:06 crc kubenswrapper[4874]: E0122 11:41:06.715961 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.715838 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.716011 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:06 crc kubenswrapper[4874]: E0122 11:41:06.716057 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:06 crc kubenswrapper[4874]: E0122 11:41:06.716165 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.728323 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.744098 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.757230 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.772186 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.789889 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.789922 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.789930 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.789944 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.789954 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:06Z","lastTransitionTime":"2026-01-22T11:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.792672 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" 
(2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.828252 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.846452 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:
28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.865618 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.880106 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.892483 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.892541 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.892558 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.892581 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.892599 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:06Z","lastTransitionTime":"2026-01-22T11:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.901687 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.916989 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.934414 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:56Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0122 11:40:55.854329 6276 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:55.854364 6276 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 11:40:55.854411 6276 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 11:40:55.854421 6276 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0122 11:40:55.854565 6276 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854637 6276 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 11:40:55.854653 6276 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:55.854670 6276 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 11:40:55.854674 6276 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 11:40:55.854687 6276 factory.go:656] Stopping watch factory\\\\nI0122 11:40:55.854697 6276 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:55.854712 6276 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854735 6276 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 11:40:55.854740 6276 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 11:40:55.854749 6276 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0122 11:40:55.854804 6276 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca0072
53cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.946211 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f42870
09b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.984460 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.994949 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.995003 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.995017 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.995035 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:06 crc kubenswrapper[4874]: I0122 11:41:06.995047 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:06Z","lastTransitionTime":"2026-01-22T11:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.000994 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:06Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.023639 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:07Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.038797 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:07Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.098142 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.098211 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.098232 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.098261 4874 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.098283 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.201492 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.201539 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.201551 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.201570 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.201581 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.304908 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.304966 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.304986 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.305015 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.305038 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.408132 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.408171 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.408184 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.408202 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.408217 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.511214 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.511656 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.511887 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.512146 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.512346 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.615657 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.615969 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.616148 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.616297 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.616534 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.682148 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:37:06.881433082 +0000 UTC Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.715896 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:07 crc kubenswrapper[4874]: E0122 11:41:07.716102 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.720678 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.720772 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.720798 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.720858 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.720888 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.823474 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.823528 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.823546 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.823569 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.823588 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.873320 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.873353 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.873364 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.873378 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.873389 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: E0122 11:41:07.894054 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:07Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.898972 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.899014 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.899024 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.899041 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.899053 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: E0122 11:41:07.913738 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:07Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.919168 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.919222 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.919238 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.919263 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.919282 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: E0122 11:41:07.938455 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:07Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.943336 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.943516 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.943539 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.943558 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.943574 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: E0122 11:41:07.961888 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:07Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.967283 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.967354 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.967371 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.967424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.967444 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:07 crc kubenswrapper[4874]: E0122 11:41:07.987285 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:07Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:07 crc kubenswrapper[4874]: E0122 11:41:07.987553 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.991153 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.991244 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.991262 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.991287 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:07 crc kubenswrapper[4874]: I0122 11:41:07.991304 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:07Z","lastTransitionTime":"2026-01-22T11:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.094057 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.094365 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.094662 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.095083 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.095623 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:08Z","lastTransitionTime":"2026-01-22T11:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.198730 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.199082 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.199362 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.199599 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.199736 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:08Z","lastTransitionTime":"2026-01-22T11:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.302515 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.302564 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.302586 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.302613 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.302635 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:08Z","lastTransitionTime":"2026-01-22T11:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.405791 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.405837 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.405853 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.405875 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.405891 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:08Z","lastTransitionTime":"2026-01-22T11:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.509254 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.509339 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.509363 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.509386 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.509428 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:08Z","lastTransitionTime":"2026-01-22T11:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.611932 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.611976 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.611993 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.612014 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.612031 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:08Z","lastTransitionTime":"2026-01-22T11:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.682950 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:32:20.683985974 +0000 UTC Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.715612 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.715658 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.715897 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:08 crc kubenswrapper[4874]: E0122 11:41:08.715934 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.715954 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.715995 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.716019 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.716037 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:08Z","lastTransitionTime":"2026-01-22T11:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:08 crc kubenswrapper[4874]: E0122 11:41:08.716059 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.716558 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:08 crc kubenswrapper[4874]: E0122 11:41:08.716700 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.819232 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.819285 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.819296 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.819313 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.819324 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:08Z","lastTransitionTime":"2026-01-22T11:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.922465 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.922545 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.922564 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.922590 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:08 crc kubenswrapper[4874]: I0122 11:41:08.922608 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:08Z","lastTransitionTime":"2026-01-22T11:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.026379 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.026468 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.026486 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.026510 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.026527 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:09Z","lastTransitionTime":"2026-01-22T11:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.128952 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.129024 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.129059 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.129108 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.129146 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:09Z","lastTransitionTime":"2026-01-22T11:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.232488 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.232549 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.232565 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.232590 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.232607 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:09Z","lastTransitionTime":"2026-01-22T11:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.334907 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.334969 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.334984 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.335000 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.335011 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:09Z","lastTransitionTime":"2026-01-22T11:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.437059 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.437097 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.437109 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.437127 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.437140 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:09Z","lastTransitionTime":"2026-01-22T11:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.539922 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.539983 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.539999 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.540023 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.540040 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:09Z","lastTransitionTime":"2026-01-22T11:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.643961 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.644022 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.644044 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.644076 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.644099 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:09Z","lastTransitionTime":"2026-01-22T11:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.683972 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:35:35.304586327 +0000 UTC Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.715510 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:09 crc kubenswrapper[4874]: E0122 11:41:09.715765 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.746743 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.746806 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.746829 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.746854 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.746873 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:09Z","lastTransitionTime":"2026-01-22T11:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.850650 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.850696 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.850708 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.850727 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.850742 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:09Z","lastTransitionTime":"2026-01-22T11:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.952905 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.952943 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.952953 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.952967 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:09 crc kubenswrapper[4874]: I0122 11:41:09.952990 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:09Z","lastTransitionTime":"2026-01-22T11:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.055095 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.055146 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.055157 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.055175 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.055185 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:10Z","lastTransitionTime":"2026-01-22T11:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.157828 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.157864 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.157876 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.157891 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.157901 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:10Z","lastTransitionTime":"2026-01-22T11:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.260795 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.260893 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.260912 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.261377 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.261625 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:10Z","lastTransitionTime":"2026-01-22T11:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.364706 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.364756 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.364770 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.364792 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.364807 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:10Z","lastTransitionTime":"2026-01-22T11:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.467905 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.468200 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.468479 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.468517 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.468530 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:10Z","lastTransitionTime":"2026-01-22T11:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.570883 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.570932 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.570950 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.570972 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.570991 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:10Z","lastTransitionTime":"2026-01-22T11:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.674041 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.674094 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.674113 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.674137 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.674154 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:10Z","lastTransitionTime":"2026-01-22T11:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.686102 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:29:11.94413328 +0000 UTC Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.715886 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.715878 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.715915 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:10 crc kubenswrapper[4874]: E0122 11:41:10.716027 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.716725 4874 scope.go:117] "RemoveContainer" containerID="6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9" Jan 22 11:41:10 crc kubenswrapper[4874]: E0122 11:41:10.716719 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:10 crc kubenswrapper[4874]: E0122 11:41:10.716970 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.778258 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.778318 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.778336 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.778362 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.778380 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:10Z","lastTransitionTime":"2026-01-22T11:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.881472 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.882002 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.882021 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.883216 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.884043 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:10Z","lastTransitionTime":"2026-01-22T11:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.986872 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.986901 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.986911 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.986923 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:10 crc kubenswrapper[4874]: I0122 11:41:10.986933 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:10Z","lastTransitionTime":"2026-01-22T11:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.003288 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/1.log" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.005512 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerStarted","Data":"fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337"} Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.006673 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.016602 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619
ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.027578 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c
9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.039903 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.050929 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.062212 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.077488 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.089488 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc 
kubenswrapper[4874]: I0122 11:41:11.089798 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.089830 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.089842 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.089858 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.089871 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:11Z","lastTransitionTime":"2026-01-22T11:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.107949 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" 
(2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.132853 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.166977 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:56Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0122 11:40:55.854329 6276 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:55.854364 6276 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 11:40:55.854411 6276 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 11:40:55.854421 6276 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0122 11:40:55.854565 6276 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854637 6276 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 11:40:55.854653 6276 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:55.854670 6276 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 11:40:55.854674 6276 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 11:40:55.854687 6276 factory.go:656] Stopping watch factory\\\\nI0122 11:40:55.854697 6276 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:55.854712 6276 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854735 6276 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 11:40:55.854740 6276 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 11:40:55.854749 6276 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0122 11:40:55.854804 6276 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:41:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.188079 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.192014 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.192041 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.192051 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.192066 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.192075 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:11Z","lastTransitionTime":"2026-01-22T11:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.201342 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.212258 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.229637 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.252409 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b5
0dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:
40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.271471 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.284677 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f42870
09b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:11Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.294156 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.294194 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.294203 4874 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.294217 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.294225 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:11Z","lastTransitionTime":"2026-01-22T11:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.396274 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.396316 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.396328 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.396343 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.396355 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:11Z","lastTransitionTime":"2026-01-22T11:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.499210 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.499255 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.499271 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.499291 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.499308 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:11Z","lastTransitionTime":"2026-01-22T11:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.602034 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.602080 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.602091 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.602108 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.602121 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:11Z","lastTransitionTime":"2026-01-22T11:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.687454 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:32:38.566116847 +0000 UTC Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.710516 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.710615 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.710633 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.710656 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.710672 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:11Z","lastTransitionTime":"2026-01-22T11:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.715204 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:11 crc kubenswrapper[4874]: E0122 11:41:11.715384 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.813501 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.813569 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.813588 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.813613 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.813631 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:11Z","lastTransitionTime":"2026-01-22T11:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.917312 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.917369 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.917388 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.917461 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:11 crc kubenswrapper[4874]: I0122 11:41:11.917480 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:11Z","lastTransitionTime":"2026-01-22T11:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.012449 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/2.log" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.013450 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/1.log" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.018228 4874 generic.go:334] "Generic (PLEG): container finished" podID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerID="fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337" exitCode=1 Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.018283 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337"} Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.018370 4874 scope.go:117] "RemoveContainer" containerID="6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.019334 4874 scope.go:117] "RemoveContainer" containerID="fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337" Jan 22 11:41:12 crc kubenswrapper[4874]: E0122 11:41:12.019677 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.019825 4874 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.019895 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.019922 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.019954 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.019980 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:12Z","lastTransitionTime":"2026-01-22T11:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.064227 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.078469 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.095360 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b5
0dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:
40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.105866 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.120982 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f42870
09b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.122669 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.122702 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.122710 4874 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.122725 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.122736 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:12Z","lastTransitionTime":"2026-01-22T11:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.136182 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27670
3f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.156509 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.170306 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.186567 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.201270 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.220668 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.224120 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.224154 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.224163 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.224177 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.224187 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:12Z","lastTransitionTime":"2026-01-22T11:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.235194 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccd
be99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.252807 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.264845 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc 
kubenswrapper[4874]: I0122 11:41:12.279442 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.292327 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.309292 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cef98ffee64a9c18ee659084940bd115933c3b805e846f3798a4af2a27cb0c9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:40:56Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0122 11:40:55.854329 6276 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:40:55.854364 6276 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0122 11:40:55.854411 6276 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0122 11:40:55.854421 6276 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0122 11:40:55.854565 6276 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854637 6276 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0122 11:40:55.854653 6276 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0122 11:40:55.854670 6276 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0122 11:40:55.854674 6276 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0122 11:40:55.854687 6276 factory.go:656] Stopping watch factory\\\\nI0122 11:40:55.854697 6276 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:40:55.854712 6276 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0122 11:40:55.854735 6276 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0122 11:40:55.854740 6276 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0122 11:40:55.854749 6276 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0122 11:40:55.854804 6276 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:11Z\\\",\\\"message\\\":\\\"actory.go:160\\\\nI0122 11:41:11.510239 6499 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510274 6499 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510315 6499 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 11:41:11.516659 6499 
handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:41:11.516689 6499 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:41:11.516719 6499 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:41:11.516760 6499 factory.go:656] Stopping watch factory\\\\nI0122 11:41:11.516776 6499 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:41:11.529962 6499 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 11:41:11.529996 6499 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 11:41:11.530061 6499 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:41:11.530100 6499 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 11:41:11.530194 6499 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:41:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"
mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:12Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.327526 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.327564 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.327573 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.327587 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.327596 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:12Z","lastTransitionTime":"2026-01-22T11:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.430348 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.430451 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.430476 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.430502 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.430518 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:12Z","lastTransitionTime":"2026-01-22T11:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.533053 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.533141 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.533163 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.533190 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.533207 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:12Z","lastTransitionTime":"2026-01-22T11:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.636269 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.636329 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.636345 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.636368 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.636438 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:12Z","lastTransitionTime":"2026-01-22T11:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.687868 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:44:03.324922232 +0000 UTC Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.715653 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.715668 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.715769 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:12 crc kubenswrapper[4874]: E0122 11:41:12.715850 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:12 crc kubenswrapper[4874]: E0122 11:41:12.715913 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:12 crc kubenswrapper[4874]: E0122 11:41:12.715937 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.739114 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.739158 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.739169 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.739187 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.739199 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:12Z","lastTransitionTime":"2026-01-22T11:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.840915 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.840958 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.840966 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.840982 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.840992 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:12Z","lastTransitionTime":"2026-01-22T11:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.944123 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.944195 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.944218 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.944249 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:12 crc kubenswrapper[4874]: I0122 11:41:12.944268 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:12Z","lastTransitionTime":"2026-01-22T11:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.025609 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/2.log" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.032290 4874 scope.go:117] "RemoveContainer" containerID="fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337" Jan 22 11:41:13 crc kubenswrapper[4874]: E0122 11:41:13.032652 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.048207 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.048275 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.048298 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.048330 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.048353 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:13Z","lastTransitionTime":"2026-01-22T11:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.056393 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.077712 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.110454 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:11Z\\\",\\\"message\\\":\\\"actory.go:160\\\\nI0122 11:41:11.510239 6499 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510274 6499 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510315 6499 reflector.go:311] Stopping reflector 
*v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 11:41:11.516659 6499 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:41:11.516689 6499 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:41:11.516719 6499 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:41:11.516760 6499 factory.go:656] Stopping watch factory\\\\nI0122 11:41:11.516776 6499 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:41:11.529962 6499 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 11:41:11.529996 6499 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 11:41:11.530061 6499 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:41:11.530100 6499 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 11:41:11.530194 6499 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:41:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca0072
53cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.128795 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.145298 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.151771 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:13 crc 
kubenswrapper[4874]: I0122 11:41:13.151834 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.151852 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.151876 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.151894 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:13Z","lastTransitionTime":"2026-01-22T11:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.164674 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1de
a0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.178335 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.195610 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f42870
09b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.211118 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.229934 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.241857 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.253981 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.254035 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.254048 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.254067 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.254080 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:13Z","lastTransitionTime":"2026-01-22T11:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.257515 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.275738 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\"
,\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 
genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b8
9c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.302993 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.316975 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:
28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.333802 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.349493 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:13Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.356208 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.356246 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.356259 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.356275 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.356286 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:13Z","lastTransitionTime":"2026-01-22T11:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.459182 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.459259 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.459282 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.459314 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.459337 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:13Z","lastTransitionTime":"2026-01-22T11:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.563463 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.563507 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.563550 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.563576 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.563592 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:13Z","lastTransitionTime":"2026-01-22T11:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.666369 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.666434 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.666444 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.666457 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.666467 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:13Z","lastTransitionTime":"2026-01-22T11:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.688690 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:48:04.703006194 +0000 UTC Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.715254 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:13 crc kubenswrapper[4874]: E0122 11:41:13.715428 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.770028 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.770107 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.770130 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.770159 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.770182 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:13Z","lastTransitionTime":"2026-01-22T11:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.873503 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.873553 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.873564 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.873578 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.873587 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:13Z","lastTransitionTime":"2026-01-22T11:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.976714 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.976778 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.976796 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.976820 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:13 crc kubenswrapper[4874]: I0122 11:41:13.976849 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:13Z","lastTransitionTime":"2026-01-22T11:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.079456 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.079518 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.079540 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.079573 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.079596 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:14Z","lastTransitionTime":"2026-01-22T11:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.182818 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.182887 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.182911 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.182941 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.182963 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:14Z","lastTransitionTime":"2026-01-22T11:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.285944 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.285995 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.286011 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.286033 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.286061 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:14Z","lastTransitionTime":"2026-01-22T11:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.387882 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.387934 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.387942 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.387954 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.388001 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:14Z","lastTransitionTime":"2026-01-22T11:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.412447 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:14 crc kubenswrapper[4874]: E0122 11:41:14.412587 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:41:14 crc kubenswrapper[4874]: E0122 11:41:14.412638 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs podName:5451fbab-ebad-42e7-bb80-f94bad10d571 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:30.412625253 +0000 UTC m=+64.257696313 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs") pod "network-metrics-daemon-lr2vd" (UID: "5451fbab-ebad-42e7-bb80-f94bad10d571") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.490262 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.490313 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.490329 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.490350 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.490366 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:14Z","lastTransitionTime":"2026-01-22T11:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.606853 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.606958 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.606984 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.607013 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.607035 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:14Z","lastTransitionTime":"2026-01-22T11:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.688986 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:45:43.122171471 +0000 UTC Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.710260 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.710322 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.710333 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.710347 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.710359 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:14Z","lastTransitionTime":"2026-01-22T11:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.715858 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.715939 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:14 crc kubenswrapper[4874]: E0122 11:41:14.715995 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.716044 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:14 crc kubenswrapper[4874]: E0122 11:41:14.716266 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:14 crc kubenswrapper[4874]: E0122 11:41:14.716497 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.813250 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.813320 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.813340 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.813362 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.813379 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:14Z","lastTransitionTime":"2026-01-22T11:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.916420 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.916470 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.916480 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.916499 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:14 crc kubenswrapper[4874]: I0122 11:41:14.916512 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:14Z","lastTransitionTime":"2026-01-22T11:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.019344 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.019376 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.019384 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.019425 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.019434 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:15Z","lastTransitionTime":"2026-01-22T11:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.122706 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.122769 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.122844 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.122869 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.122886 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:15Z","lastTransitionTime":"2026-01-22T11:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.225568 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.225654 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.225680 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.225710 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.225732 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:15Z","lastTransitionTime":"2026-01-22T11:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.240179 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.257118 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.261058 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.277706 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.296634 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.316765 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.328844 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.329117 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.329252 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.329460 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.329847 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:15Z","lastTransitionTime":"2026-01-22T11:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.349986 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.369360 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.385840 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.401129 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc 
kubenswrapper[4874]: I0122 11:41:15.423286 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759
dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC 
(now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582
5771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.433468 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.433519 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.433533 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.433557 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.433574 
4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:15Z","lastTransitionTime":"2026-01-22T11:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.439071 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ov
errides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.463840 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:11Z\\\",\\\"message\\\":\\\"actory.go:160\\\\nI0122 11:41:11.510239 6499 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510274 6499 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510315 6499 reflector.go:311] Stopping reflector 
*v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 11:41:11.516659 6499 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:41:11.516689 6499 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:41:11.516719 6499 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:41:11.516760 6499 factory.go:656] Stopping watch factory\\\\nI0122 11:41:11.516776 6499 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:41:11.529962 6499 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 11:41:11.529996 6499 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 11:41:11.530061 6499 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:41:11.530100 6499 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 11:41:11.530194 6499 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:41:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca0072
53cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.484804 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.502955 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.518877 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.535740 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:15 crc 
kubenswrapper[4874]: I0122 11:41:15.535779 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.535791 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.535806 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.535817 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:15Z","lastTransitionTime":"2026-01-22T11:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.540343 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1de
a0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.554514 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.565089 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f42870
09b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:15Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.638988 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.639171 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.639279 4874 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.639384 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.639533 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:15Z","lastTransitionTime":"2026-01-22T11:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.691693 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:04:11.154455496 +0000 UTC Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.716015 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:15 crc kubenswrapper[4874]: E0122 11:41:15.716156 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.742051 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.742085 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.742097 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.742114 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.742126 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:15Z","lastTransitionTime":"2026-01-22T11:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.844900 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.844975 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.844989 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.845007 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.845030 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:15Z","lastTransitionTime":"2026-01-22T11:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.948543 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.948595 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.948608 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.948626 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:15 crc kubenswrapper[4874]: I0122 11:41:15.948640 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:15Z","lastTransitionTime":"2026-01-22T11:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.050520 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.050558 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.050569 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.050586 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.050597 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:16Z","lastTransitionTime":"2026-01-22T11:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.154072 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.154144 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.154160 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.154180 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.154195 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:16Z","lastTransitionTime":"2026-01-22T11:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.257045 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.257087 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.257121 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.257139 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.257150 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:16Z","lastTransitionTime":"2026-01-22T11:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.360149 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.360206 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.360226 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.360249 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.360266 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:16Z","lastTransitionTime":"2026-01-22T11:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.463870 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.463936 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.463953 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.463985 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.464024 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:16Z","lastTransitionTime":"2026-01-22T11:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.567800 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.567875 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.567906 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.567938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.567960 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:16Z","lastTransitionTime":"2026-01-22T11:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.637707 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.637896 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 11:41:48.637859987 +0000 UTC m=+82.482931127 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.638020 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.638084 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.638137 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.638278 4874 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.638348 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.638354 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.638366 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:48.638342183 +0000 UTC m=+82.483413283 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.638380 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.638983 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.639093 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:48.639061456 +0000 UTC m=+82.484132566 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.639200 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-22 11:41:48.63916747 +0000 UTC m=+82.484238590 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.670849 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.670910 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.670922 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.670940 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.670949 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:16Z","lastTransitionTime":"2026-01-22T11:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.693317 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 05:23:41.263561064 +0000 UTC Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.715144 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.715219 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.715273 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.715442 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.715500 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.715671 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.735903 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d542969f-3655-4a4e-8a4d-238cff44f86e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9feb2908cd58fbcf7ae2f0e4281b7c0ef1a68896ab514d9aa90f347f7346b479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0
6bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9d6d3c847805c81649e7524a51e0d1a261d3c75d10c90e9e2d6d6a0723ff76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b6dcbe2e50cd1aa3a3e2edbc5401888ebd6f99cbbf5329245bd4f61bf7db75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.738716 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.738899 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.738918 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.738933 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:41:16 crc kubenswrapper[4874]: E0122 11:41:16.738986 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 11:41:48.738971854 +0000 UTC m=+82.584042944 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.759110 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.773855 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.773896 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.773905 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.773937 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.773947 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:16Z","lastTransitionTime":"2026-01-22T11:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.779739 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.799187 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.811834 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.824945 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f42870
09b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.839994 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.854486 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.864808 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.876514 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.876543 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.876565 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.876579 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.876601 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:16Z","lastTransitionTime":"2026-01-22T11:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.880619 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.892983 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\"
,\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 
genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b8
9c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.910560 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.922076 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:
28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.933549 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.942385 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.958905 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.972214 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.978559 4874 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.978586 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.978595 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.978609 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.978619 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:16Z","lastTransitionTime":"2026-01-22T11:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:16 crc kubenswrapper[4874]: I0122 11:41:16.992426 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:11Z\\\",\\\"message\\\":\\\"actory.go:160\\\\nI0122 11:41:11.510239 6499 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510274 6499 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510315 6499 reflector.go:311] Stopping reflector 
*v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 11:41:11.516659 6499 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:41:11.516689 6499 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:41:11.516719 6499 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:41:11.516760 6499 factory.go:656] Stopping watch factory\\\\nI0122 11:41:11.516776 6499 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:41:11.529962 6499 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 11:41:11.529996 6499 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 11:41:11.530061 6499 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:41:11.530100 6499 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 11:41:11.530194 6499 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:41:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca0072
53cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:16Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.080750 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.080809 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.080831 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.080861 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.080882 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:17Z","lastTransitionTime":"2026-01-22T11:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.183859 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.183910 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.183927 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.183951 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.183967 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:17Z","lastTransitionTime":"2026-01-22T11:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.286496 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.286533 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.286545 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.286560 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.286570 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:17Z","lastTransitionTime":"2026-01-22T11:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.389229 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.389285 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.389301 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.389325 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.389343 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:17Z","lastTransitionTime":"2026-01-22T11:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.492739 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.493142 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.493302 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.493491 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.493690 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:17Z","lastTransitionTime":"2026-01-22T11:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.597012 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.597358 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.597653 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.597938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.598167 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:17Z","lastTransitionTime":"2026-01-22T11:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.694465 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:30:45.555761875 +0000 UTC Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.701844 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.701915 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.701938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.701969 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.701991 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:17Z","lastTransitionTime":"2026-01-22T11:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.716057 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:17 crc kubenswrapper[4874]: E0122 11:41:17.716226 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.805275 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.805329 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.805340 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.805360 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.805374 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:17Z","lastTransitionTime":"2026-01-22T11:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.908791 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.908842 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.908853 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.908870 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:17 crc kubenswrapper[4874]: I0122 11:41:17.908882 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:17Z","lastTransitionTime":"2026-01-22T11:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.011714 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.011746 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.011754 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.011769 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.011780 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.115033 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.115076 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.115088 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.115104 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.115115 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.217928 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.217986 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.218078 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.218109 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.218121 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.298445 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.298509 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.298526 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.298550 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.298567 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: E0122 11:41:18.319993 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:18Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.324837 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.324861 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.324870 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.324882 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.324890 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: E0122 11:41:18.344258 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:18Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.348984 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.349043 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.349067 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.349095 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.349122 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: E0122 11:41:18.366384 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:18Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.370772 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.370819 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.370898 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.370936 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.370952 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: E0122 11:41:18.389430 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:18Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.393042 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.393085 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.393097 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.393116 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.393450 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: E0122 11:41:18.411755 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:18Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:18 crc kubenswrapper[4874]: E0122 11:41:18.411870 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.414089 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.414168 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.414186 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.414209 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.414227 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.517954 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.518074 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.518098 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.518128 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.518155 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.621507 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.621559 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.621569 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.621583 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.621593 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.695555 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:21:48.652884682 +0000 UTC Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.716031 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.716092 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:18 crc kubenswrapper[4874]: E0122 11:41:18.716197 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.716278 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:18 crc kubenswrapper[4874]: E0122 11:41:18.716542 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:18 crc kubenswrapper[4874]: E0122 11:41:18.716284 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.724819 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.724902 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.724924 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.724959 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.724980 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.828374 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.828461 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.828477 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.828502 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.828519 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.931004 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.931064 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.931082 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.931107 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:18 crc kubenswrapper[4874]: I0122 11:41:18.931132 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:18Z","lastTransitionTime":"2026-01-22T11:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.033995 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.034060 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.034080 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.034110 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.034132 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:19Z","lastTransitionTime":"2026-01-22T11:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.136824 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.136880 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.136897 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.136921 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.136952 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:19Z","lastTransitionTime":"2026-01-22T11:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.240787 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.240843 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.240862 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.240884 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.240904 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:19Z","lastTransitionTime":"2026-01-22T11:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.344227 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.344277 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.344289 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.344309 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.344320 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:19Z","lastTransitionTime":"2026-01-22T11:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.447849 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.447926 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.447944 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.448455 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.448513 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:19Z","lastTransitionTime":"2026-01-22T11:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.551690 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.551776 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.551804 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.551834 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.551856 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:19Z","lastTransitionTime":"2026-01-22T11:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.655126 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.655197 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.655219 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.655249 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.655274 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:19Z","lastTransitionTime":"2026-01-22T11:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.695831 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:16:59.969366602 +0000 UTC Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.715046 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:19 crc kubenswrapper[4874]: E0122 11:41:19.715189 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.760168 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.760233 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.760251 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.760276 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.760299 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:19Z","lastTransitionTime":"2026-01-22T11:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.863856 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.863923 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.863947 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.863978 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.863997 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:19Z","lastTransitionTime":"2026-01-22T11:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.966746 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.966829 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.966852 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.966882 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:19 crc kubenswrapper[4874]: I0122 11:41:19.966906 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:19Z","lastTransitionTime":"2026-01-22T11:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.070769 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.070836 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.070857 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.070881 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.070898 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:20Z","lastTransitionTime":"2026-01-22T11:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.173933 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.173991 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.174012 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.174036 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.174052 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:20Z","lastTransitionTime":"2026-01-22T11:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.277272 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.277347 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.277364 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.277393 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.277435 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:20Z","lastTransitionTime":"2026-01-22T11:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.381195 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.381241 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.381257 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.381278 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.381295 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:20Z","lastTransitionTime":"2026-01-22T11:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.484225 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.484292 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.484315 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.484344 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.484365 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:20Z","lastTransitionTime":"2026-01-22T11:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.587805 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.587871 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.587890 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.587914 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.587931 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:20Z","lastTransitionTime":"2026-01-22T11:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.690661 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.690735 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.690760 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.690788 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.690810 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:20Z","lastTransitionTime":"2026-01-22T11:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.695940 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:36:43.461722688 +0000 UTC Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.715326 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.715550 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.715431 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:20 crc kubenswrapper[4874]: E0122 11:41:20.715698 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:20 crc kubenswrapper[4874]: E0122 11:41:20.715810 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:20 crc kubenswrapper[4874]: E0122 11:41:20.715926 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.793747 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.793782 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.793794 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.793810 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.793822 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:20Z","lastTransitionTime":"2026-01-22T11:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.896356 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.896580 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.896604 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.896628 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:20 crc kubenswrapper[4874]: I0122 11:41:20.896648 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:20Z","lastTransitionTime":"2026-01-22T11:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.001883 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.001960 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.001979 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.002001 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.002020 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:21Z","lastTransitionTime":"2026-01-22T11:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.106173 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.106296 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.106327 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.106469 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.106573 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:21Z","lastTransitionTime":"2026-01-22T11:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.210225 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.210289 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.210306 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.210329 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.210346 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:21Z","lastTransitionTime":"2026-01-22T11:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.313473 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.313514 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.313526 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.313540 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.313549 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:21Z","lastTransitionTime":"2026-01-22T11:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.417216 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.417275 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.417292 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.417314 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.417330 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:21Z","lastTransitionTime":"2026-01-22T11:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.520778 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.520844 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.520861 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.520885 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.520902 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:21Z","lastTransitionTime":"2026-01-22T11:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.623271 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.623308 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.623319 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.623335 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.623371 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:21Z","lastTransitionTime":"2026-01-22T11:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.696079 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 07:18:49.615358566 +0000 UTC Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.715669 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:21 crc kubenswrapper[4874]: E0122 11:41:21.715789 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.725452 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.725511 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.725528 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.725554 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.725581 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:21Z","lastTransitionTime":"2026-01-22T11:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.828791 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.828873 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.828893 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.828916 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.828961 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:21Z","lastTransitionTime":"2026-01-22T11:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.932374 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.932503 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.932527 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.932553 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:21 crc kubenswrapper[4874]: I0122 11:41:21.932572 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:21Z","lastTransitionTime":"2026-01-22T11:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.036224 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.036285 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.036303 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.036327 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.036347 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:22Z","lastTransitionTime":"2026-01-22T11:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.139275 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.139338 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.139356 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.139383 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.139433 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:22Z","lastTransitionTime":"2026-01-22T11:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.242643 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.243109 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.243269 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.243482 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.243666 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:22Z","lastTransitionTime":"2026-01-22T11:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.346961 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.347010 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.347058 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.347077 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.347089 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:22Z","lastTransitionTime":"2026-01-22T11:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.449260 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.449322 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.449333 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.449351 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.449363 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:22Z","lastTransitionTime":"2026-01-22T11:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.552506 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.552566 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.552578 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.552593 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.552604 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:22Z","lastTransitionTime":"2026-01-22T11:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.654769 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.654819 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.654835 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.654855 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.654871 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:22Z","lastTransitionTime":"2026-01-22T11:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.696603 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:52:31.354151941 +0000 UTC Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.715170 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.715244 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.715313 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:22 crc kubenswrapper[4874]: E0122 11:41:22.715433 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:22 crc kubenswrapper[4874]: E0122 11:41:22.715575 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:22 crc kubenswrapper[4874]: E0122 11:41:22.715687 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.757630 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.757670 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.757684 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.757703 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.757719 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:22Z","lastTransitionTime":"2026-01-22T11:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.860691 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.860724 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.860732 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.860745 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.860754 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:22Z","lastTransitionTime":"2026-01-22T11:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.963119 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.963185 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.963202 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.963517 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:22 crc kubenswrapper[4874]: I0122 11:41:22.963540 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:22Z","lastTransitionTime":"2026-01-22T11:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.066877 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.066941 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.066963 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.066992 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.067013 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:23Z","lastTransitionTime":"2026-01-22T11:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.170207 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.170294 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.170319 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.170359 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.170384 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:23Z","lastTransitionTime":"2026-01-22T11:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.274048 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.274118 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.274137 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.274164 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.274187 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:23Z","lastTransitionTime":"2026-01-22T11:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.378011 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.378113 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.378141 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.378175 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.378216 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:23Z","lastTransitionTime":"2026-01-22T11:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.481249 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.481683 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.481841 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.481997 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.482134 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:23Z","lastTransitionTime":"2026-01-22T11:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.585874 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.585925 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.585937 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.585957 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.585971 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:23Z","lastTransitionTime":"2026-01-22T11:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.688465 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.688508 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.688520 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.688536 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.688547 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:23Z","lastTransitionTime":"2026-01-22T11:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.696765 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:12:27.323434594 +0000 UTC Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.715494 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:23 crc kubenswrapper[4874]: E0122 11:41:23.715681 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.790225 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.790286 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.790299 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.790311 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.790323 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:23Z","lastTransitionTime":"2026-01-22T11:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.893538 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.893586 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.893595 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.893608 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.893616 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:23Z","lastTransitionTime":"2026-01-22T11:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.996154 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.996185 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.996194 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.996206 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:23 crc kubenswrapper[4874]: I0122 11:41:23.996215 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:23Z","lastTransitionTime":"2026-01-22T11:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.097970 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.098033 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.098050 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.098074 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.098092 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:24Z","lastTransitionTime":"2026-01-22T11:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.201067 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.201131 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.201145 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.201180 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.201220 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:24Z","lastTransitionTime":"2026-01-22T11:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.304579 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.304663 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.304691 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.304718 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.304737 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:24Z","lastTransitionTime":"2026-01-22T11:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.407988 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.408082 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.408107 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.408135 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.408156 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:24Z","lastTransitionTime":"2026-01-22T11:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.511738 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.511801 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.511817 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.511842 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.511859 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:24Z","lastTransitionTime":"2026-01-22T11:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.614896 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.614967 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.614990 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.615015 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.615030 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:24Z","lastTransitionTime":"2026-01-22T11:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.697495 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:39:51.88347177 +0000 UTC Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.716164 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.716276 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:24 crc kubenswrapper[4874]: E0122 11:41:24.716387 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:24 crc kubenswrapper[4874]: E0122 11:41:24.716533 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.716679 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:24 crc kubenswrapper[4874]: E0122 11:41:24.716784 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.718377 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.718434 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.718463 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.718496 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.718509 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:24Z","lastTransitionTime":"2026-01-22T11:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.821820 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.821893 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.821913 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.821938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.821979 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:24Z","lastTransitionTime":"2026-01-22T11:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.925075 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.925151 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.925181 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.925210 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:24 crc kubenswrapper[4874]: I0122 11:41:24.925232 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:24Z","lastTransitionTime":"2026-01-22T11:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.028793 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.028848 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.028863 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.028884 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.028898 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:25Z","lastTransitionTime":"2026-01-22T11:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.131547 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.131597 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.131613 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.131636 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.131652 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:25Z","lastTransitionTime":"2026-01-22T11:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.235386 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.235566 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.235594 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.235624 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.235648 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:25Z","lastTransitionTime":"2026-01-22T11:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.338326 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.338808 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.338961 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.339129 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.339255 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:25Z","lastTransitionTime":"2026-01-22T11:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.442735 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.442802 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.442820 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.442844 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.442861 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:25Z","lastTransitionTime":"2026-01-22T11:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.545348 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.545608 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.545686 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.545800 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.545878 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:25Z","lastTransitionTime":"2026-01-22T11:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.648269 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.648374 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.648437 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.648472 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.648496 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:25Z","lastTransitionTime":"2026-01-22T11:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.698203 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:37:23.510430792 +0000 UTC Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.715577 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:25 crc kubenswrapper[4874]: E0122 11:41:25.716107 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.752055 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.752125 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.752148 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.752178 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.752196 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:25Z","lastTransitionTime":"2026-01-22T11:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.855437 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.855502 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.855519 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.855542 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.855559 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:25Z","lastTransitionTime":"2026-01-22T11:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.958608 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.958653 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.958665 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.958682 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:25 crc kubenswrapper[4874]: I0122 11:41:25.958694 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:25Z","lastTransitionTime":"2026-01-22T11:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.064758 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.065379 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.065543 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.065589 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.065634 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:26Z","lastTransitionTime":"2026-01-22T11:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.168956 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.169001 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.169018 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.169039 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.169055 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:26Z","lastTransitionTime":"2026-01-22T11:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.272251 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.272290 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.272298 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.272312 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.272322 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:26Z","lastTransitionTime":"2026-01-22T11:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.375518 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.375853 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.375998 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.376215 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.376486 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:26Z","lastTransitionTime":"2026-01-22T11:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.479603 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.479635 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.479642 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.479654 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.479662 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:26Z","lastTransitionTime":"2026-01-22T11:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.582041 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.582089 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.582105 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.582130 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.582147 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:26Z","lastTransitionTime":"2026-01-22T11:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.684448 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.684495 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.684507 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.684523 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.684845 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:26Z","lastTransitionTime":"2026-01-22T11:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.699134 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:16:43.750801151 +0000 UTC Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.715900 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.716053 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:26 crc kubenswrapper[4874]: E0122 11:41:26.716054 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:26 crc kubenswrapper[4874]: E0122 11:41:26.716126 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.716189 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:26 crc kubenswrapper[4874]: E0122 11:41:26.716241 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.732263 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c
5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.757150 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.775598 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:
28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.787290 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.787574 
4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.787775 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.787946 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.788110 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:26Z","lastTransitionTime":"2026-01-22T11:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.792034 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.805835 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc 
kubenswrapper[4874]: I0122 11:41:26.821200 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.835312 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.857057 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:11Z\\\",\\\"message\\\":\\\"actory.go:160\\\\nI0122 11:41:11.510239 6499 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510274 6499 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510315 6499 reflector.go:311] Stopping reflector 
*v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 11:41:11.516659 6499 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:41:11.516689 6499 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:41:11.516719 6499 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:41:11.516760 6499 factory.go:656] Stopping watch factory\\\\nI0122 11:41:11.516776 6499 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:41:11.529962 6499 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 11:41:11.529996 6499 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 11:41:11.530061 6499 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:41:11.530100 6499 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 11:41:11.530194 6499 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:41:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca0072
53cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.871613 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d542969f-3655-4a4e-8a4d-238cff44f86e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9feb2908cd58fbcf7ae2f0e4281b7c0ef1a68896ab514d9aa90f347f7346b479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9d6d3c847805c81649e7524a51e0d1a261d3c75d10c90e9e2d6d6a0723ff76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b6dcbe2e50cd1aa3a3e2edbc5401888ebd6f99cbbf5329245bd4f61bf7db75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.885789 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.891935 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.892138 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.892268 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 
11:41:26.892450 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.892639 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:26Z","lastTransitionTime":"2026-01-22T11:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.898588 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.912738 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e1
2f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895
a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.924219 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.934706 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f4287009b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.946839 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.959105 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.971461 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.982548 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:26Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.995702 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.995739 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.995748 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.995762 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:26 crc kubenswrapper[4874]: I0122 11:41:26.995772 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:26Z","lastTransitionTime":"2026-01-22T11:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.097344 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.097375 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.097384 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.097420 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.097435 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:27Z","lastTransitionTime":"2026-01-22T11:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.199084 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.199489 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.199654 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.199777 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.199887 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:27Z","lastTransitionTime":"2026-01-22T11:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.302928 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.302971 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.302984 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.303001 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.303013 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:27Z","lastTransitionTime":"2026-01-22T11:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.405387 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.405435 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.405445 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.405459 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.405470 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:27Z","lastTransitionTime":"2026-01-22T11:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.507810 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.507849 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.507858 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.507872 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.507882 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:27Z","lastTransitionTime":"2026-01-22T11:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.610838 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.610878 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.610889 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.610906 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.610917 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:27Z","lastTransitionTime":"2026-01-22T11:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.699721 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 13:11:26.445714087 +0000 UTC Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.713669 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.713720 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.713732 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.713751 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.713764 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:27Z","lastTransitionTime":"2026-01-22T11:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.715600 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:27 crc kubenswrapper[4874]: E0122 11:41:27.715778 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.816765 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.816801 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.816813 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.816829 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.816841 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:27Z","lastTransitionTime":"2026-01-22T11:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.921021 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.921090 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.921111 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.921133 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:27 crc kubenswrapper[4874]: I0122 11:41:27.921149 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:27Z","lastTransitionTime":"2026-01-22T11:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.030452 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.030522 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.030545 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.030577 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.030600 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.133355 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.133462 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.133487 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.133519 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.133543 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.237828 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.237869 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.237885 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.237900 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.237978 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.342060 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.342123 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.342139 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.342164 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.342182 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.445066 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.445122 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.445142 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.445165 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.445181 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.550025 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.550074 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.550087 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.550108 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.550122 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.653544 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.653604 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.653622 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.653645 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.653661 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.701645 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 18:49:33.73075206 +0000 UTC Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.715380 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.715490 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.715492 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:28 crc kubenswrapper[4874]: E0122 11:41:28.715682 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:28 crc kubenswrapper[4874]: E0122 11:41:28.715821 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:28 crc kubenswrapper[4874]: E0122 11:41:28.716324 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.716953 4874 scope.go:117] "RemoveContainer" containerID="fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337" Jan 22 11:41:28 crc kubenswrapper[4874]: E0122 11:41:28.717381 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.756533 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.756600 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.756627 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.756655 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.756676 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.804270 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.804343 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.804361 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.804383 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.804433 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: E0122 11:41:28.825241 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:28Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.830717 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.830766 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.830779 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.830798 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.830810 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: E0122 11:41:28.848437 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:28Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.853488 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.853564 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.853589 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.853622 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.853644 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: E0122 11:41:28.873850 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:28Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.878953 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.879019 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.879059 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.879094 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.879117 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: E0122 11:41:28.900451 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:28Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.905664 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.905727 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.905749 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.905778 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.905798 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:28 crc kubenswrapper[4874]: E0122 11:41:28.928093 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:28Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:28 crc kubenswrapper[4874]: E0122 11:41:28.928315 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.930470 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.930546 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.930565 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.930994 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:28 crc kubenswrapper[4874]: I0122 11:41:28.931055 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:28Z","lastTransitionTime":"2026-01-22T11:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.033969 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.034008 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.034021 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.034040 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.034054 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:29Z","lastTransitionTime":"2026-01-22T11:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.137466 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.137503 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.137518 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.137536 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.137548 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:29Z","lastTransitionTime":"2026-01-22T11:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.240441 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.240474 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.240487 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.240500 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.240509 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:29Z","lastTransitionTime":"2026-01-22T11:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.343525 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.343564 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.343576 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.343592 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.343603 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:29Z","lastTransitionTime":"2026-01-22T11:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.445948 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.445996 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.446009 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.446028 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.446041 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:29Z","lastTransitionTime":"2026-01-22T11:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.549320 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.549373 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.549385 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.549434 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.549449 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:29Z","lastTransitionTime":"2026-01-22T11:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.652261 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.652343 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.652364 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.652389 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.652439 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:29Z","lastTransitionTime":"2026-01-22T11:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.752692 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 18:06:57.06985107 +0000 UTC Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.753003 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:29 crc kubenswrapper[4874]: E0122 11:41:29.753542 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.754716 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.754764 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.754787 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.754816 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.754839 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:29Z","lastTransitionTime":"2026-01-22T11:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.857802 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.857864 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.857882 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.857907 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.857924 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:29Z","lastTransitionTime":"2026-01-22T11:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.961530 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.961592 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.961603 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.961617 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:29 crc kubenswrapper[4874]: I0122 11:41:29.961627 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:29Z","lastTransitionTime":"2026-01-22T11:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.065145 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.065177 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.065185 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.065201 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.065210 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:30Z","lastTransitionTime":"2026-01-22T11:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.168069 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.168110 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.168119 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.168133 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.168142 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:30Z","lastTransitionTime":"2026-01-22T11:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.270441 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.270475 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.270483 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.270495 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.270504 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:30Z","lastTransitionTime":"2026-01-22T11:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.372827 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.372869 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.372878 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.372894 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.372904 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:30Z","lastTransitionTime":"2026-01-22T11:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.462326 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:30 crc kubenswrapper[4874]: E0122 11:41:30.462657 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:41:30 crc kubenswrapper[4874]: E0122 11:41:30.462820 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs podName:5451fbab-ebad-42e7-bb80-f94bad10d571 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:02.462785416 +0000 UTC m=+96.307856516 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs") pod "network-metrics-daemon-lr2vd" (UID: "5451fbab-ebad-42e7-bb80-f94bad10d571") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.475958 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.476017 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.476034 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.476060 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.476077 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:30Z","lastTransitionTime":"2026-01-22T11:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.578777 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.578819 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.578830 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.578847 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.578856 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:30Z","lastTransitionTime":"2026-01-22T11:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.681133 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.681166 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.681175 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.681189 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.681199 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:30Z","lastTransitionTime":"2026-01-22T11:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.716017 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.716027 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:30 crc kubenswrapper[4874]: E0122 11:41:30.716165 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.716192 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:30 crc kubenswrapper[4874]: E0122 11:41:30.716280 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:30 crc kubenswrapper[4874]: E0122 11:41:30.716412 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.753076 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 07:21:12.169443812 +0000 UTC Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.783384 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.783468 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.783486 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.783509 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.783527 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:30Z","lastTransitionTime":"2026-01-22T11:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.886534 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.886579 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.886592 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.886609 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.886621 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:30Z","lastTransitionTime":"2026-01-22T11:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.989641 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.989680 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.989689 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.989703 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:30 crc kubenswrapper[4874]: I0122 11:41:30.989712 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:30Z","lastTransitionTime":"2026-01-22T11:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.092293 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.092336 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.092364 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.092383 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.092418 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:31Z","lastTransitionTime":"2026-01-22T11:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.194476 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.194526 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.194535 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.194551 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.194561 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:31Z","lastTransitionTime":"2026-01-22T11:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.296863 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.296931 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.296944 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.296994 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.297009 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:31Z","lastTransitionTime":"2026-01-22T11:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.399176 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.399246 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.399259 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.399277 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.399287 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:31Z","lastTransitionTime":"2026-01-22T11:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.501304 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.501341 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.501350 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.501365 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.501374 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:31Z","lastTransitionTime":"2026-01-22T11:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.604411 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.604453 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.604463 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.604478 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.604489 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:31Z","lastTransitionTime":"2026-01-22T11:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.707469 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.707501 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.707509 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.707522 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.707533 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:31Z","lastTransitionTime":"2026-01-22T11:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.716032 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:31 crc kubenswrapper[4874]: E0122 11:41:31.716211 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.754207 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:51:58.545358631 +0000 UTC Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.810450 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.810496 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.810511 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.810528 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.810541 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:31Z","lastTransitionTime":"2026-01-22T11:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.912895 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.912943 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.912954 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.912970 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:31 crc kubenswrapper[4874]: I0122 11:41:31.912980 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:31Z","lastTransitionTime":"2026-01-22T11:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.016328 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.016444 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.016466 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.016495 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.016535 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:32Z","lastTransitionTime":"2026-01-22T11:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.098030 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krrtc_977746b5-ac1b-4b6e-bdbc-ddd90225e68c/kube-multus/0.log" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.098079 4874 generic.go:334] "Generic (PLEG): container finished" podID="977746b5-ac1b-4b6e-bdbc-ddd90225e68c" containerID="600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf" exitCode=1 Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.098120 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krrtc" event={"ID":"977746b5-ac1b-4b6e-bdbc-ddd90225e68c","Type":"ContainerDied","Data":"600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf"} Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.098733 4874 scope.go:117] "RemoveContainer" containerID="600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.112887 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.120586 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.120631 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.120648 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.120671 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.120690 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:32Z","lastTransitionTime":"2026-01-22T11:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.135807 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:11Z\\\",\\\"message\\\":\\\"actory.go:160\\\\nI0122 11:41:11.510239 6499 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510274 6499 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510315 6499 reflector.go:311] Stopping reflector 
*v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 11:41:11.516659 6499 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:41:11.516689 6499 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:41:11.516719 6499 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:41:11.516760 6499 factory.go:656] Stopping watch factory\\\\nI0122 11:41:11.516776 6499 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:41:11.529962 6499 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 11:41:11.529996 6499 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 11:41:11.530061 6499 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:41:11.530100 6499 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 11:41:11.530194 6499 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:41:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca0072
53cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.154227 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.170583 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d542969f-3655-4a4e-8a4d-238cff44f86e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9feb2908cd58fbcf7ae2f0e4281b7c0ef1a68896ab514d9aa90f347f7346b479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9d6d3c847805c81649e7524a51e0d1a261d3c75d10c90e9e2d6d6a0723ff76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b6dcbe2e50cd1aa3a3e2edbc5401888ebd6f99cbbf5329245bd4f61bf7db75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.184824 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.197059 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:31Z\\\",\\\"message\\\":\\\"2026-01-22T11:40:46+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90\\\\n2026-01-22T11:40:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90 to /host/opt/cni/bin/\\\\n2026-01-22T11:40:46Z [verbose] multus-daemon started\\\\n2026-01-22T11:40:46Z [verbose] Readiness Indicator file check\\\\n2026-01-22T11:41:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.215238 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1de
a0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.223419 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.223465 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.223478 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.223498 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.223513 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:32Z","lastTransitionTime":"2026-01-22T11:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.228843 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.243656 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc2
8a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f4287009b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.257507 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.268454 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.279731 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.293853 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.324782 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d83
1ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.329149 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.329176 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.329184 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.329197 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.329206 4874 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:32Z","lastTransitionTime":"2026-01-22T11:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.340009 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.351666 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.361043 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.374853 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:32Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.431236 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.431277 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.431286 4874 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.431299 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.431309 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:32Z","lastTransitionTime":"2026-01-22T11:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.533082 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.533110 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.533119 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.533130 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.533139 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:32Z","lastTransitionTime":"2026-01-22T11:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.635022 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.635062 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.635071 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.635086 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.635098 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:32Z","lastTransitionTime":"2026-01-22T11:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.715287 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.715328 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:32 crc kubenswrapper[4874]: E0122 11:41:32.715445 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.715475 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:32 crc kubenswrapper[4874]: E0122 11:41:32.715575 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:32 crc kubenswrapper[4874]: E0122 11:41:32.715644 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.736651 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.736688 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.736699 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.736714 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.736725 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:32Z","lastTransitionTime":"2026-01-22T11:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.755021 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:10:53.973580311 +0000 UTC Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.839240 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.839282 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.839294 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.839310 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.839322 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:32Z","lastTransitionTime":"2026-01-22T11:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.942035 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.942066 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.942075 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.942089 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:32 crc kubenswrapper[4874]: I0122 11:41:32.942099 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:32Z","lastTransitionTime":"2026-01-22T11:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.044664 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.044710 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.044722 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.044737 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.044748 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:33Z","lastTransitionTime":"2026-01-22T11:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.104032 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krrtc_977746b5-ac1b-4b6e-bdbc-ddd90225e68c/kube-multus/0.log" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.104087 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krrtc" event={"ID":"977746b5-ac1b-4b6e-bdbc-ddd90225e68c","Type":"ContainerStarted","Data":"cecfbaa0efaf8c435c3409ccad9deaa4cc25167f0b978622d1ab9c949c4024c8"} Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.118876 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.131604 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c
9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.147551 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.147591 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.147601 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:33 crc 
kubenswrapper[4874]: I0122 11:41:33.147617 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.147628 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:33Z","lastTransitionTime":"2026-01-22T11:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.148612 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.167603 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.181324 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.197632 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.208076 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc 
kubenswrapper[4874]: I0122 11:41:33.225783 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759
dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC 
(now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582
5771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.251973 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.252015 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.252024 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.252042 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.252059 
4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:33Z","lastTransitionTime":"2026-01-22T11:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.256898 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\
"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c4
35ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b5
4b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.282376 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:11Z\\\",\\\"message\\\":\\\"actory.go:160\\\\nI0122 11:41:11.510239 6499 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510274 6499 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510315 6499 reflector.go:311] Stopping reflector 
*v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 11:41:11.516659 6499 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:41:11.516689 6499 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:41:11.516719 6499 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:41:11.516760 6499 factory.go:656] Stopping watch factory\\\\nI0122 11:41:11.516776 6499 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:41:11.529962 6499 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 11:41:11.529996 6499 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 11:41:11.530061 6499 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:41:11.530100 6499 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 11:41:11.530194 6499 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:41:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca0072
53cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.297137 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.309836 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.321940 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.333518 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cecfbaa0efaf8c435c3409ccad9deaa4cc25167f0b978622d1ab9c949c4024c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:31Z\\\",\\\"message\\\":\\\"2026-01-22T11:40:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90\\\\n2026-01-22T11:40:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90 to /host/opt/cni/bin/\\\\n2026-01-22T11:40:46Z [verbose] multus-daemon started\\\\n2026-01-22T11:40:46Z [verbose] 
Readiness Indicator file check\\\\n2026-01-22T11:41:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.345780 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e
516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.354484 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.354516 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.354527 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.354541 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:33 crc kubenswrapper[4874]: 
I0122 11:41:33.354564 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:33Z","lastTransitionTime":"2026-01-22T11:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.356793 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.369161 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f42870
09b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.381154 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d542969f-3655-4a4e-8a4d-238cff44f86e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9feb2908cd58fbcf7ae2f0e4281b7c0ef1a68896ab514d9aa90f347f7346b479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9d6d3c847805c81649e7524a51e0d1a261d3c75d10c90e9e2d6d6a0723ff76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b6dcbe2e50cd1aa3a3e2edbc5401888ebd6f99cbbf5329245bd4f61bf7db75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:33Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.456870 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.456939 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.456961 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.456990 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.457010 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:33Z","lastTransitionTime":"2026-01-22T11:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.559196 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.559235 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.559248 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.559264 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.559275 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:33Z","lastTransitionTime":"2026-01-22T11:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.661590 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.661628 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.661639 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.661654 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.661664 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:33Z","lastTransitionTime":"2026-01-22T11:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.715575 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:33 crc kubenswrapper[4874]: E0122 11:41:33.715799 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.755498 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:46:59.084904856 +0000 UTC Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.764503 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.764561 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.764605 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.764624 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.764635 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:33Z","lastTransitionTime":"2026-01-22T11:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.866838 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.866870 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.866881 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.866898 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.866910 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:33Z","lastTransitionTime":"2026-01-22T11:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.969347 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.969385 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.969410 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.969425 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:33 crc kubenswrapper[4874]: I0122 11:41:33.969436 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:33Z","lastTransitionTime":"2026-01-22T11:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.071646 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.071699 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.071721 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.071749 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.071775 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:34Z","lastTransitionTime":"2026-01-22T11:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.174215 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.174292 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.174319 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.174350 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.174373 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:34Z","lastTransitionTime":"2026-01-22T11:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.277229 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.277308 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.277329 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.277353 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.277375 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:34Z","lastTransitionTime":"2026-01-22T11:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.380281 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.380329 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.380340 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.380360 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.380369 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:34Z","lastTransitionTime":"2026-01-22T11:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.483039 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.483074 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.483082 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.483096 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.483105 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:34Z","lastTransitionTime":"2026-01-22T11:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.585692 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.585730 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.585739 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.585752 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.585760 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:34Z","lastTransitionTime":"2026-01-22T11:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.688621 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.688715 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.688737 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.688772 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.688796 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:34Z","lastTransitionTime":"2026-01-22T11:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.715435 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.715540 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:34 crc kubenswrapper[4874]: E0122 11:41:34.715614 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.715728 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:34 crc kubenswrapper[4874]: E0122 11:41:34.715830 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:34 crc kubenswrapper[4874]: E0122 11:41:34.715965 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.755763 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:29:39.66697323 +0000 UTC Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.791262 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.791315 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.791331 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.791353 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.791369 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:34Z","lastTransitionTime":"2026-01-22T11:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.893844 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.893896 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.893910 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.893950 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.893968 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:34Z","lastTransitionTime":"2026-01-22T11:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.996148 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.996195 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.996209 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.996225 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:34 crc kubenswrapper[4874]: I0122 11:41:34.996237 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:34Z","lastTransitionTime":"2026-01-22T11:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.098303 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.098351 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.098366 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.098388 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.098428 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:35Z","lastTransitionTime":"2026-01-22T11:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.200915 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.200970 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.200986 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.201006 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.201024 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:35Z","lastTransitionTime":"2026-01-22T11:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.302872 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.302919 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.302932 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.302951 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.302963 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:35Z","lastTransitionTime":"2026-01-22T11:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.405428 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.405485 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.405497 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.405512 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.405521 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:35Z","lastTransitionTime":"2026-01-22T11:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.507352 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.507408 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.507425 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.507440 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.507449 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:35Z","lastTransitionTime":"2026-01-22T11:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.608963 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.609005 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.609019 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.609034 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.609045 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:35Z","lastTransitionTime":"2026-01-22T11:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.710997 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.711048 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.711064 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.711088 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.711103 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:35Z","lastTransitionTime":"2026-01-22T11:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.715293 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:35 crc kubenswrapper[4874]: E0122 11:41:35.715645 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.756133 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:47:21.635014387 +0000 UTC Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.813705 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.813750 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.813762 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.813779 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.813791 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:35Z","lastTransitionTime":"2026-01-22T11:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.915682 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.915728 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.915740 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.915757 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:35 crc kubenswrapper[4874]: I0122 11:41:35.915767 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:35Z","lastTransitionTime":"2026-01-22T11:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.017848 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.017876 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.017887 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.017901 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.017912 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:36Z","lastTransitionTime":"2026-01-22T11:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.120347 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.120416 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.120435 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.120457 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.120472 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:36Z","lastTransitionTime":"2026-01-22T11:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.223209 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.223243 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.223252 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.223269 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.223278 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:36Z","lastTransitionTime":"2026-01-22T11:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.325747 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.325790 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.325801 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.325819 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.325833 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:36Z","lastTransitionTime":"2026-01-22T11:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.427978 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.428038 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.428048 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.428062 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.428071 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:36Z","lastTransitionTime":"2026-01-22T11:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.531028 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.531064 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.531073 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.531087 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.531096 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:36Z","lastTransitionTime":"2026-01-22T11:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.632975 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.633017 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.633028 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.633043 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.633055 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:36Z","lastTransitionTime":"2026-01-22T11:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.715686 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.715727 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.715743 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:36 crc kubenswrapper[4874]: E0122 11:41:36.716109 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:36 crc kubenswrapper[4874]: E0122 11:41:36.716206 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:36 crc kubenswrapper[4874]: E0122 11:41:36.715968 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.726496 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c
5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.737904 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.737946 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.737956 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.737977 4874 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.737990 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:36Z","lastTransitionTime":"2026-01-22T11:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.747379 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.756757 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 02:38:42.088652766 +0000 UTC Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.762800 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.775650 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.784786 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc 
kubenswrapper[4874]: I0122 11:41:36.796628 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.809231 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.830931 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:11Z\\\",\\\"message\\\":\\\"actory.go:160\\\\nI0122 11:41:11.510239 6499 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510274 6499 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510315 6499 reflector.go:311] Stopping reflector 
*v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 11:41:11.516659 6499 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:41:11.516689 6499 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:41:11.516719 6499 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:41:11.516760 6499 factory.go:656] Stopping watch factory\\\\nI0122 11:41:11.516776 6499 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:41:11.529962 6499 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 11:41:11.529996 6499 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 11:41:11.530061 6499 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:41:11.530100 6499 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 11:41:11.530194 6499 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:41:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca0072
53cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.839762 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.839805 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.839816 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.839835 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.839847 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:36Z","lastTransitionTime":"2026-01-22T11:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.842057 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d542969f-3655-4a4e-8a4d-238cff44f86e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9feb2908cd58fbcf7ae2f0e4281b7c0ef1a68896ab514d9aa90f347f7346b479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9d6d3c847805c81649e7524a51e0d1a261d3c75d10c90e9e2d6d6a0723ff76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b6dcbe2e50cd1aa3a3e2edbc5401888ebd6f99cbbf5329245bd4f61bf7db75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.853790 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.865639 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cecfbaa0efaf8c435c3409ccad9deaa4cc25167f0b978622d1ab9c949c4024c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:31Z\\\",\\\"message\\\":\\\"2026-01-22T11:40:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90\\\\n2026-01-22T11:40:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90 to /host/opt/cni/bin/\\\\n2026-01-22T11:40:46Z [verbose] multus-daemon started\\\\n2026-01-22T11:40:46Z [verbose] 
Readiness Indicator file check\\\\n2026-01-22T11:41:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.878445 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e
516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.888487 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.899030 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f4287009b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.909832 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.921324 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.932355 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.941847 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.941886 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.941898 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.941915 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.941926 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:36Z","lastTransitionTime":"2026-01-22T11:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:36 crc kubenswrapper[4874]: I0122 11:41:36.942727 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:36Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.043597 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.043627 4874 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.043636 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.043650 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.043662 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:37Z","lastTransitionTime":"2026-01-22T11:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.145936 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.146011 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.146022 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.146036 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.146045 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:37Z","lastTransitionTime":"2026-01-22T11:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.248353 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.248415 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.248427 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.248441 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.248453 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:37Z","lastTransitionTime":"2026-01-22T11:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.365992 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.366025 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.366033 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.366046 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.366054 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:37Z","lastTransitionTime":"2026-01-22T11:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.468489 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.468545 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.468560 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.468582 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.468601 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:37Z","lastTransitionTime":"2026-01-22T11:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.571641 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.571696 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.571708 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.571727 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.571742 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:37Z","lastTransitionTime":"2026-01-22T11:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.675125 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.675169 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.675180 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.675195 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.675209 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:37Z","lastTransitionTime":"2026-01-22T11:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.715533 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:37 crc kubenswrapper[4874]: E0122 11:41:37.715642 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.757916 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:19:36.955140537 +0000 UTC Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.777633 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.777701 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.777723 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.777754 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.777777 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:37Z","lastTransitionTime":"2026-01-22T11:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.879915 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.880004 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.880027 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.880053 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.880072 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:37Z","lastTransitionTime":"2026-01-22T11:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.981737 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.981767 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.981776 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.981788 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:37 crc kubenswrapper[4874]: I0122 11:41:37.981796 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:37Z","lastTransitionTime":"2026-01-22T11:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.084197 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.084224 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.084232 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.084244 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.084253 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:38Z","lastTransitionTime":"2026-01-22T11:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.186184 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.186222 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.186233 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.186249 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.186260 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:38Z","lastTransitionTime":"2026-01-22T11:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.288192 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.288228 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.288239 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.288256 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.288268 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:38Z","lastTransitionTime":"2026-01-22T11:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.391303 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.391344 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.391352 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.391366 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.391375 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:38Z","lastTransitionTime":"2026-01-22T11:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.493508 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.493741 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.493843 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.493928 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.493996 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:38Z","lastTransitionTime":"2026-01-22T11:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.596511 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.596889 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.597030 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.597139 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.597227 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:38Z","lastTransitionTime":"2026-01-22T11:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.699863 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.699931 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.699943 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.699959 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.699969 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:38Z","lastTransitionTime":"2026-01-22T11:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.715494 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.715543 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:38 crc kubenswrapper[4874]: E0122 11:41:38.715601 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:38 crc kubenswrapper[4874]: E0122 11:41:38.715691 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.715778 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:38 crc kubenswrapper[4874]: E0122 11:41:38.715836 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.758833 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:15:36.769117105 +0000 UTC Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.803114 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.803153 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.803161 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.803177 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.803191 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:38Z","lastTransitionTime":"2026-01-22T11:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.905671 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.905701 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.905711 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.905743 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:38 crc kubenswrapper[4874]: I0122 11:41:38.905753 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:38Z","lastTransitionTime":"2026-01-22T11:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.008307 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.008420 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.008609 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.008636 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.008649 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.057455 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.057487 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.057496 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.057509 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.057520 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: E0122 11:41:39.070641 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:39Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.074326 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.074367 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.074378 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.074408 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.074422 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: E0122 11:41:39.089688 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:39Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.092970 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.093014 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.093030 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.093051 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.093066 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: E0122 11:41:39.106764 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:39Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.109737 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.109829 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.109908 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.109980 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.110067 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: E0122 11:41:39.123156 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:39Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.126063 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.126092 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.126101 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.126114 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.126138 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: E0122 11:41:39.138849 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:39Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:39 crc kubenswrapper[4874]: E0122 11:41:39.138993 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.140231 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.140261 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.140271 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.140286 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.140296 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.242639 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.242666 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.242675 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.242689 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.242699 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.344306 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.344331 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.344338 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.344351 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.344360 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.446381 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.446426 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.446433 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.446449 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.446459 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.548906 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.548944 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.548954 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.548972 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.548983 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.651670 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.651716 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.651727 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.651741 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.651752 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.715266 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:39 crc kubenswrapper[4874]: E0122 11:41:39.715509 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.716085 4874 scope.go:117] "RemoveContainer" containerID="fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.727319 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.753193 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.753229 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.753238 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.753252 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.753262 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.759804 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:17:52.16725777 +0000 UTC Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.855515 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.855842 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.855851 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.855864 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.855874 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.958567 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.958621 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.958630 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.958647 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:39 crc kubenswrapper[4874]: I0122 11:41:39.958656 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:39Z","lastTransitionTime":"2026-01-22T11:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.060650 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.060681 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.060689 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.060701 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.060710 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:40Z","lastTransitionTime":"2026-01-22T11:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.127210 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/2.log" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.136719 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerStarted","Data":"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35"} Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.137292 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.152727 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.163236 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.163287 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.163304 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.163325 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.163338 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:40Z","lastTransitionTime":"2026-01-22T11:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.175967 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.187325 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.198489 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.207746 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc 
kubenswrapper[4874]: I0122 11:41:40.220894 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.236134 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.260755 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:11Z\\\",\\\"message\\\":\\\"actory.go:160\\\\nI0122 11:41:11.510239 6499 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510274 6499 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510315 6499 reflector.go:311] Stopping reflector 
*v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 11:41:11.516659 6499 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:41:11.516689 6499 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:41:11.516719 6499 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:41:11.516760 6499 factory.go:656] Stopping watch factory\\\\nI0122 11:41:11.516776 6499 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:41:11.529962 6499 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 11:41:11.529996 6499 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 11:41:11.530061 6499 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:41:11.530100 6499 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 11:41:11.530194 6499 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:41:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:41:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.265331 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.265367 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.265380 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.265411 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.265424 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:40Z","lastTransitionTime":"2026-01-22T11:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.274021 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb774505-642d-49da-a7c9-20fc8991d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ca3abbae7b47aa2ee502ed6cc36a325843a9f44e1c7881ba5a142bd13dd1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54087988f1e51ce17beb3055e35b3ff31a6aa3cc3d687a18a7f6afdf9505e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54087988f1e51ce17beb3055e35b3ff31a6aa3cc3d687a18a7f6afdf9505e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.294710 4874 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d542969f-3655-4a4e-8a4d-238cff44f86e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9feb2908cd58fbcf7ae2f0e4281b7c0ef1a68896ab514d9aa90f347f7346b479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9d6d3c847805c81649e7524a51e0d1a261d3c75d10c90e9e2d6d6a0723ff76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b6dcbe2e50cd1aa3a3e2edbc5401888ebd6f99cbbf5329245bd4f61bf7db75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.311000 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.322642 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cecfbaa0efaf8c435c3409ccad9deaa4cc25167f0b978622d1ab9c949c4024c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:31Z\\\",\\\"message\\\":\\\"2026-01-22T11:40:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90\\\\n2026-01-22T11:40:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90 to /host/opt/cni/bin/\\\\n2026-01-22T11:40:46Z [verbose] multus-daemon started\\\\n2026-01-22T11:40:46Z [verbose] 
Readiness Indicator file check\\\\n2026-01-22T11:41:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.337839 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e
516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.347314 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.356667 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f4287009b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.367165 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.367200 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.367211 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.367227 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.367240 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:40Z","lastTransitionTime":"2026-01-22T11:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.367450 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.378180 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.387481 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.396209 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:40Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.469653 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.469689 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.469702 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.469718 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.469731 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:40Z","lastTransitionTime":"2026-01-22T11:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.572439 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.572484 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.572506 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.572520 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.572528 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:40Z","lastTransitionTime":"2026-01-22T11:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.674268 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.674308 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.674321 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.674336 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.674348 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:40Z","lastTransitionTime":"2026-01-22T11:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.715909 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.715955 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.715909 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:40 crc kubenswrapper[4874]: E0122 11:41:40.716046 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:40 crc kubenswrapper[4874]: E0122 11:41:40.716100 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:40 crc kubenswrapper[4874]: E0122 11:41:40.716142 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.759900 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:03:09.524152126 +0000 UTC Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.777050 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.777075 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.777083 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.777095 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.777103 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:40Z","lastTransitionTime":"2026-01-22T11:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.879341 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.879373 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.879382 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.879424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.879433 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:40Z","lastTransitionTime":"2026-01-22T11:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.981593 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.981656 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.981668 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.981685 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:40 crc kubenswrapper[4874]: I0122 11:41:40.981697 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:40Z","lastTransitionTime":"2026-01-22T11:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.083883 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.083911 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.083920 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.083932 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.083940 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:41Z","lastTransitionTime":"2026-01-22T11:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.140841 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/3.log" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.141253 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/2.log" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.143071 4874 generic.go:334] "Generic (PLEG): container finished" podID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerID="052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35" exitCode=1 Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.143103 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35"} Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.143132 4874 scope.go:117] "RemoveContainer" containerID="fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.143731 4874 scope.go:117] "RemoveContainer" containerID="052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35" Jan 22 11:41:41 crc kubenswrapper[4874]: E0122 11:41:41.143858 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.159698 4874 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.175236 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.186452 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.191682 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.191708 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.191718 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.191732 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.191741 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:41Z","lastTransitionTime":"2026-01-22T11:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.203158 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.216980 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\"
,\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 
genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b8
9c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.242526 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.254691 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:
28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.265879 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.279730 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.294479 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.294554 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.294568 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.294880 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.294895 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:41Z","lastTransitionTime":"2026-01-22T11:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.296175 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.310222 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.327557 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa3afc6324d7be4bb8f25f93db31da4205d1cf4c8956bbee8fc391d2fe99d337\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:11Z\\\",\\\"message\\\":\\\"actory.go:160\\\\nI0122 11:41:11.510239 6499 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510274 6499 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 11:41:11.510315 6499 reflector.go:311] Stopping reflector 
*v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 11:41:11.516659 6499 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 11:41:11.516689 6499 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0122 11:41:11.516719 6499 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0122 11:41:11.516760 6499 factory.go:656] Stopping watch factory\\\\nI0122 11:41:11.516776 6499 handler.go:208] Removed *v1.Node event handler 2\\\\nI0122 11:41:11.529962 6499 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0122 11:41:11.529996 6499 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0122 11:41:11.530061 6499 ovnkube.go:599] Stopped ovnkube\\\\nI0122 11:41:11.530100 6499 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0122 11:41:11.530194 6499 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:41:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:40Z\\\",\\\"message\\\":\\\"GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.38:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 11:41:40.694245 6904 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.38:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0122 11:41:40.694231 6904 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:41:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"n
ame\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"en
v-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 
11:41:41.339224 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f4287009b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.348044 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb774505-642d-49da-a7c9-20fc8991d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ca3abbae7b47aa2ee502ed6cc36a325843a9f44e1c7881ba5a142bd13dd1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54087988f1e51ce17beb3055e35b3ff31a6aa3cc3d687a18a7f6afdf9505e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54087988f1e51ce17beb3055e35b3ff31a6aa3cc3d687a18a7f6afdf9505e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.357777 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d542969f-3655-4a4e-8a4d-238cff44f86e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9feb2908cd58fbcf7ae2f0e4281b7c0ef1a68896ab514d9aa90f347f7346b479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9d6d3c847805c81649e7524a51e0d1a261d3c75d10c90e9e2d6d6a0723ff76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b6dcbe2e50cd1aa3a3e2edbc5401888ebd6f99cbbf5329245bd4f61bf7db75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.368004 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.377958 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cecfbaa0efaf8c435c3409ccad9deaa4cc25167f0b978622d1ab9c949c4024c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:31Z\\\",\\\"message\\\":\\\"2026-01-22T11:40:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90\\\\n2026-01-22T11:40:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90 to /host/opt/cni/bin/\\\\n2026-01-22T11:40:46Z [verbose] multus-daemon started\\\\n2026-01-22T11:40:46Z [verbose] 
Readiness Indicator file check\\\\n2026-01-22T11:41:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.390427 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e
516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.396718 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.396752 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.396762 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.396775 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:41 crc kubenswrapper[4874]: 
I0122 11:41:41.396783 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:41Z","lastTransitionTime":"2026-01-22T11:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.399729 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:41Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.499829 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.499864 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.499873 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.499887 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.499903 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:41Z","lastTransitionTime":"2026-01-22T11:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.602083 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.602109 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.602118 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.602133 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.602143 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:41Z","lastTransitionTime":"2026-01-22T11:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.704779 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.704858 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.704882 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.704911 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.704933 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:41Z","lastTransitionTime":"2026-01-22T11:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.715344 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:41 crc kubenswrapper[4874]: E0122 11:41:41.715560 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.760470 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:52:05.533615663 +0000 UTC Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.807984 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.808020 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.808033 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.808049 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.808062 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:41Z","lastTransitionTime":"2026-01-22T11:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.910951 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.910996 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.911010 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.911031 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:41 crc kubenswrapper[4874]: I0122 11:41:41.911046 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:41Z","lastTransitionTime":"2026-01-22T11:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.013539 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.013572 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.013581 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.013595 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.013604 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:42Z","lastTransitionTime":"2026-01-22T11:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.116220 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.116260 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.116269 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.116286 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.116296 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:42Z","lastTransitionTime":"2026-01-22T11:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.148109 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/3.log" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.152998 4874 scope.go:117] "RemoveContainer" containerID="052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35" Jan 22 11:41:42 crc kubenswrapper[4874]: E0122 11:41:42.153199 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.169149 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.183649 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.212889 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:40Z\\\",\\\"message\\\":\\\"GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.38:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 11:41:40.694245 6904 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.38:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0122 11:41:40.694231 6904 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:41:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca0072
53cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.218437 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.218476 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.218491 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.218509 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.218523 4874 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:42Z","lastTransitionTime":"2026-01-22T11:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.223770 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb774505-642d-49da-a7c9-20fc8991d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ca3abbae7b47aa2ee502ed6cc36a325843a9f44e1c7881ba5a142bd13dd1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54087988f1e51ce17beb3055e35b3ff31a6aa3cc3d687a18a7f6afdf9505e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54087988f1e51ce17beb3055e35b3ff31a6aa3cc3d687a18a7f6afdf9505e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.234927 4874 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d542969f-3655-4a4e-8a4d-238cff44f86e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9feb2908cd58fbcf7ae2f0e4281b7c0ef1a68896ab514d9aa90f347f7346b479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9d6d3c847805c81649e7524a51e0d1a261d3c75d10c90e9e2d6d6a0723ff76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b6dcbe2e50cd1aa3a3e2edbc5401888ebd6f99cbbf5329245bd4f61bf7db75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.245035 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.261449 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cecfbaa0efaf8c435c3409ccad9deaa4cc25167f0b978622d1ab9c949c4024c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:31Z\\\",\\\"message\\\":\\\"2026-01-22T11:40:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90\\\\n2026-01-22T11:40:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90 to /host/opt/cni/bin/\\\\n2026-01-22T11:40:46Z [verbose] multus-daemon started\\\\n2026-01-22T11:40:46Z [verbose] 
Readiness Indicator file check\\\\n2026-01-22T11:41:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.276589 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e
516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.289382 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.300137 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f4287009b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.311772 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.321090 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.321133 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.321146 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.321163 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.321175 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:42Z","lastTransitionTime":"2026-01-22T11:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.323662 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.334392 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.346910 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.361466 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.378007 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.391123 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:
28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.404140 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.449248 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.449289 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.449301 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.449326 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.449339 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:42Z","lastTransitionTime":"2026-01-22T11:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.453805 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:42Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:42 crc 
kubenswrapper[4874]: I0122 11:41:42.551852 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.551899 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.551912 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.551928 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.551939 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:42Z","lastTransitionTime":"2026-01-22T11:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.654416 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.654451 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.654460 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.654473 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.654482 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:42Z","lastTransitionTime":"2026-01-22T11:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.715282 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.715318 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.715350 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:42 crc kubenswrapper[4874]: E0122 11:41:42.715450 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:42 crc kubenswrapper[4874]: E0122 11:41:42.715521 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:42 crc kubenswrapper[4874]: E0122 11:41:42.715601 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.757281 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.757317 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.757328 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.757343 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.757354 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:42Z","lastTransitionTime":"2026-01-22T11:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.761067 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 19:17:29.065492057 +0000 UTC Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.860688 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.860729 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.860741 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.860757 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.860768 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:42Z","lastTransitionTime":"2026-01-22T11:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.962831 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.962881 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.962892 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.962909 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:42 crc kubenswrapper[4874]: I0122 11:41:42.962921 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:42Z","lastTransitionTime":"2026-01-22T11:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.065729 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.065787 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.065809 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.065837 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.065858 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:43Z","lastTransitionTime":"2026-01-22T11:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.168583 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.168642 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.168660 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.168689 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.168713 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:43Z","lastTransitionTime":"2026-01-22T11:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.271604 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.271661 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.271681 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.271707 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.271724 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:43Z","lastTransitionTime":"2026-01-22T11:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.374816 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.374850 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.374871 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.374897 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.374914 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:43Z","lastTransitionTime":"2026-01-22T11:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.478172 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.478633 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.478801 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.478978 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.479115 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:43Z","lastTransitionTime":"2026-01-22T11:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.581230 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.581267 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.581276 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.581291 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.581332 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:43Z","lastTransitionTime":"2026-01-22T11:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.684394 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.684825 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.684994 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.685145 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.685309 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:43Z","lastTransitionTime":"2026-01-22T11:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.716142 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:43 crc kubenswrapper[4874]: E0122 11:41:43.716577 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.761766 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 14:45:36.839439613 +0000 UTC Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.789008 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.789350 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.789584 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.789861 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.790070 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:43Z","lastTransitionTime":"2026-01-22T11:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.893667 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.893736 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.893756 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.893781 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.893801 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:43Z","lastTransitionTime":"2026-01-22T11:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.996949 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.997002 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.997017 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.997043 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:43 crc kubenswrapper[4874]: I0122 11:41:43.997062 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:43Z","lastTransitionTime":"2026-01-22T11:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.099627 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.099691 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.099710 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.099736 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.099755 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:44Z","lastTransitionTime":"2026-01-22T11:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.204013 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.204116 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.204183 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.204223 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.204248 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:44Z","lastTransitionTime":"2026-01-22T11:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.307854 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.307953 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.307973 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.308001 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.308019 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:44Z","lastTransitionTime":"2026-01-22T11:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.410700 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.410765 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.410803 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.410838 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.410862 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:44Z","lastTransitionTime":"2026-01-22T11:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.514131 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.514200 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.514216 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.514242 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.514258 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:44Z","lastTransitionTime":"2026-01-22T11:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.617078 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.617155 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.617177 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.617204 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.617225 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:44Z","lastTransitionTime":"2026-01-22T11:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.715925 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.716059 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.716175 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:44 crc kubenswrapper[4874]: E0122 11:41:44.716190 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:44 crc kubenswrapper[4874]: E0122 11:41:44.716363 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:44 crc kubenswrapper[4874]: E0122 11:41:44.716577 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.719962 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.719996 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.720004 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.720019 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.720029 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:44Z","lastTransitionTime":"2026-01-22T11:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.762834 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:49:04.796117869 +0000 UTC Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.822565 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.822655 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.822672 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.822697 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.822717 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:44Z","lastTransitionTime":"2026-01-22T11:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.925706 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.925768 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.925787 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.925817 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:44 crc kubenswrapper[4874]: I0122 11:41:44.925839 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:44Z","lastTransitionTime":"2026-01-22T11:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.028860 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.028920 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.028937 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.028964 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.028980 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:45Z","lastTransitionTime":"2026-01-22T11:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.131574 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.131644 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.131655 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.131671 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.131683 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:45Z","lastTransitionTime":"2026-01-22T11:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.234938 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.234986 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.235002 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.235024 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.235040 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:45Z","lastTransitionTime":"2026-01-22T11:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.344851 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.344948 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.344976 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.345011 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.345035 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:45Z","lastTransitionTime":"2026-01-22T11:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.447731 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.447769 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.447780 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.447797 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.447807 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:45Z","lastTransitionTime":"2026-01-22T11:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.550717 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.550793 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.550803 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.550825 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.550838 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:45Z","lastTransitionTime":"2026-01-22T11:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.654299 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.654341 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.654352 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.654369 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.654571 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:45Z","lastTransitionTime":"2026-01-22T11:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.715072 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:45 crc kubenswrapper[4874]: E0122 11:41:45.715256 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.757366 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.757445 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.757459 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.757476 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.757487 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:45Z","lastTransitionTime":"2026-01-22T11:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.763646 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:12:48.991125852 +0000 UTC Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.860560 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.860619 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.860641 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.860665 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.860682 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:45Z","lastTransitionTime":"2026-01-22T11:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.963995 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.964069 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.964094 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.964125 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:45 crc kubenswrapper[4874]: I0122 11:41:45.964147 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:45Z","lastTransitionTime":"2026-01-22T11:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.504350 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.504383 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.504409 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.504424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.504437 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:46Z","lastTransitionTime":"2026-01-22T11:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.606815 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.606864 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.606875 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.606893 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.606905 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:46Z","lastTransitionTime":"2026-01-22T11:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.710122 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.710178 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.710190 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.710209 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.710224 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:46Z","lastTransitionTime":"2026-01-22T11:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.715736 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:46 crc kubenswrapper[4874]: E0122 11:41:46.715877 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.716014 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.716022 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:46 crc kubenswrapper[4874]: E0122 11:41:46.716213 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:46 crc kubenswrapper[4874]: E0122 11:41:46.716370 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.733494 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.755694 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.763751 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:44:40.088664559 +0000 UTC Jan 22 11:41:46 crc 
kubenswrapper[4874]: I0122 11:41:46.780013 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:40Z\\\",\\\"message\\\":\\\"GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.38:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0122 11:41:40.694245 6904 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver-operator/metrics]} name:Service_openshift-apiserver-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.38:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0122 11:41:40.694231 6904 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:41:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f849b97108e8ca0072
53cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9lv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6tmll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.792609 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q2rnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"561c457d-4767-4b66-a07a-c435b7c9f161\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e3f356aa5539627ad44785049bfc3d7ac69b53acb6b900b6e7f568ea776db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn6zq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q2rnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.808200 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8470cb5-cfaf-4760-8c07-ce375052950f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://851de9bc28a46137d2c81498f2c5b5814139e518b3f0d2bf9e78a7928825f7c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abf0b02c44c270a461f55a331b0c381f4287009b626412019fc8b109a0e9c330\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t9smw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x5vd5\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.813476 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.813532 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.813552 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.813579 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.813597 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:46Z","lastTransitionTime":"2026-01-22T11:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.827494 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb774505-642d-49da-a7c9-20fc8991d5bf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01ca3abbae7b47aa2ee502ed6cc36a325843a9f44e1c7881ba5a142bd13dd1b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b54087988f1e51ce17beb3055e35b3ff31a6aa3cc3d687a18a7f6afdf9505e4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b54087988f1e51ce17beb3055e35b3ff31a6aa3cc3d687a18a7f6afdf9505e4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.843051 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d542969f-3655-4a4e-8a4d-238cff44f86e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9feb2908cd58fbcf7ae2f0e4281b7c0ef1a68896ab514d9aa90f347f7346b479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f9d6d3c847805c81649e7524a51e0d1a261d3c75d10c90e9e2d6d6a0723ff76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b6dcbe2e50cd1aa3a3e2edbc5401888ebd6f99cbbf5329245bd4f61bf7db75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1a089dbfb63c0a243eb60e931dbdef0c66dac617d8da395573030e5cb3c6a832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.857755 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.875434 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krrtc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"977746b5-ac1b-4b6e-bdbc-ddd90225e68c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cecfbaa0efaf8c435c3409ccad9deaa4cc25167f0b978622d1ab9c949c4024c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T11:41:31Z\\\",\\\"message\\\":\\\"2026-01-22T11:40:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90\\\\n2026-01-22T11:40:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4dce4b60-5f3b-4a6c-a314-b1ffb3916b90 to /host/opt/cni/bin/\\\\n2026-01-22T11:40:46Z [verbose] multus-daemon started\\\\n2026-01-22T11:40:46Z [verbose] 
Readiness Indicator file check\\\\n2026-01-22T11:41:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:41:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssgjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krrtc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.891732 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0e
516a3e013fb8d23955f3170640da33bd3342dd094539b0ce10e6282761907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b277eb15265649871e04b7326942a50935322dbf06b2980b50dca2c681950c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d81a42eb7434f8c47532aaa5fc067af6db074f1966ca316b5f906e4a5315819a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a89a876896ac19a81f506739c3169b672dd0f2359d3c11bf37500bd7d7b56f14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c1dea0ccef6db4e333abbfcdaf54e048a6860bb27c8863d4282424ab0ef6f63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdf62f50193de58be29655476074eda616b705e5fe64dae97bfda7aca5a690e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617ad7b2c963ff9975f51646e9d86516c895a9bea66823867fc458e199eb1d94\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-txncl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pdb2m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.907588 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34389610bb49cff50f1cc1d4eecd26651272b1a0d761d312ab63dc10dcec5176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.916584 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.916627 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.916638 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.916655 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.916666 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:46Z","lastTransitionTime":"2026-01-22T11:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.921207 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.933774 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-prbck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5ca785e-1db4-4e08-9ad0-66158728b48a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aebd343dfc49cbbea157cbfaed8299a4a619ef02f521f13f073887c989b0295f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7628z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-prbck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.943713 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c9653f9-cd5b-4b7a-8056-80ae8235d039\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac071038417ac8cf2cbb024f65f2d0c0177a34889700e0f690d5013f186236b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4prkg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.964943 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] 
validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:46 crc kubenswrapper[4874]: I0122 11:41:46.997382 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:46Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.011464 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:
28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.020260 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.020320 
4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.020334 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.020357 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.020370 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:47Z","lastTransitionTime":"2026-01-22T11:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.028055 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.043633 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:47Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:47 crc 
kubenswrapper[4874]: I0122 11:41:47.122820 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.122868 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.122880 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.122896 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.122907 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:47Z","lastTransitionTime":"2026-01-22T11:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.224767 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.224811 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.224820 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.224833 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.224841 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:47Z","lastTransitionTime":"2026-01-22T11:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.327997 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.328050 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.328067 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.328085 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.328096 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:47Z","lastTransitionTime":"2026-01-22T11:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.431297 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.431364 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.431378 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.431419 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.431435 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:47Z","lastTransitionTime":"2026-01-22T11:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.534661 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.534714 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.534724 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.534744 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.534755 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:47Z","lastTransitionTime":"2026-01-22T11:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.637990 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.638030 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.638040 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.638057 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.638075 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:47Z","lastTransitionTime":"2026-01-22T11:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.715471 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:47 crc kubenswrapper[4874]: E0122 11:41:47.715674 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.740956 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.740998 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.741009 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.741024 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.741037 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:47Z","lastTransitionTime":"2026-01-22T11:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.764476 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:29:59.780441503 +0000 UTC Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.843892 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.843943 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.843957 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.843978 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.843999 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:47Z","lastTransitionTime":"2026-01-22T11:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.946788 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.946814 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.946822 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.946836 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:47 crc kubenswrapper[4874]: I0122 11:41:47.946846 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:47Z","lastTransitionTime":"2026-01-22T11:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.050228 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.050297 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.050319 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.050347 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.050366 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:48Z","lastTransitionTime":"2026-01-22T11:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.152980 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.153022 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.153032 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.153050 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.153061 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:48Z","lastTransitionTime":"2026-01-22T11:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.254987 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.255038 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.255051 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.255068 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.255081 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:48Z","lastTransitionTime":"2026-01-22T11:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.357705 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.357763 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.357775 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.357794 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.357812 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:48Z","lastTransitionTime":"2026-01-22T11:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.460180 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.460255 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.460265 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.460284 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.460293 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:48Z","lastTransitionTime":"2026-01-22T11:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.562360 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.562417 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.562431 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.562452 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.562464 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:48Z","lastTransitionTime":"2026-01-22T11:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.665594 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.665625 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.665636 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.665652 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.665664 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:48Z","lastTransitionTime":"2026-01-22T11:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.715926 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.716069 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.716084 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.716087 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.716264 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.716383 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.730524 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.730643 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.730669 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.730686 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.730740 4874 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:52.730714819 +0000 UTC m=+146.575785919 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.730796 4874 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.730851 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.730796 4874 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.730872 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.730885 4874 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.730855 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:52.730837923 +0000 UTC m=+146.575908993 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.730941 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:52.730929776 +0000 UTC m=+146.576000946 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.730965 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:52.730958237 +0000 UTC m=+146.576029307 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.764854 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:18:41.8273358 +0000 UTC Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.767559 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.767585 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.767596 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.767612 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.767623 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:48Z","lastTransitionTime":"2026-01-22T11:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.832144 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.832290 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.832308 4874 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.832320 4874 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:41:48 crc kubenswrapper[4874]: E0122 11:41:48.832372 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:52.832356286 +0000 UTC m=+146.677427356 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.870510 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.870589 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.870614 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.870644 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.870666 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:48Z","lastTransitionTime":"2026-01-22T11:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.973582 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.973643 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.973664 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.973691 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:48 crc kubenswrapper[4874]: I0122 11:41:48.973711 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:48Z","lastTransitionTime":"2026-01-22T11:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.080265 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.080312 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.080323 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.080340 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.080352 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:49Z","lastTransitionTime":"2026-01-22T11:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.167608 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.167801 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.167830 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.167904 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.167927 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:49Z","lastTransitionTime":"2026-01-22T11:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:49 crc kubenswrapper[4874]: E0122 11:41:49.188627 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.193125 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.193177 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.193195 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.193218 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.193234 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:49Z","lastTransitionTime":"2026-01-22T11:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:49 crc kubenswrapper[4874]: E0122 11:41:49.216511 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.222264 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.222292 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.222303 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.222334 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.222346 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:49Z","lastTransitionTime":"2026-01-22T11:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:49 crc kubenswrapper[4874]: E0122 11:41:49.240804 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.244937 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.244981 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.244992 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.245010 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.245023 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:49Z","lastTransitionTime":"2026-01-22T11:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:49 crc kubenswrapper[4874]: E0122 11:41:49.262701 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.267203 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.267227 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.267235 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.267247 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.267257 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:49Z","lastTransitionTime":"2026-01-22T11:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:49 crc kubenswrapper[4874]: E0122 11:41:49.281873 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:41:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"770ef4c0-49b6-4adf-aa62-b643a71c762c\\\",\\\"systemUUID\\\":\\\"400a411a-8387-4bfb-bbce-2d30a7ad1d2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:49Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:49 crc kubenswrapper[4874]: E0122 11:41:49.282052 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.283906 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.283946 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.283961 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.283977 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.283990 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:49Z","lastTransitionTime":"2026-01-22T11:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.387263 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.387321 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.387337 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.387354 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.387366 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:49Z","lastTransitionTime":"2026-01-22T11:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.490027 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.490085 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.490095 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.490110 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.490121 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:49Z","lastTransitionTime":"2026-01-22T11:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.593004 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.593061 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.593079 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.593102 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.593119 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:49Z","lastTransitionTime":"2026-01-22T11:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.695621 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.695657 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.695667 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.695683 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.695692 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:49Z","lastTransitionTime":"2026-01-22T11:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.716121 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:49 crc kubenswrapper[4874]: E0122 11:41:49.716255 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.765263 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 22:21:51.232342252 +0000 UTC Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.798568 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.798607 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.798622 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.798641 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.798655 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:49Z","lastTransitionTime":"2026-01-22T11:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.902051 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.902122 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.902154 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.902182 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:49 crc kubenswrapper[4874]: I0122 11:41:49.902203 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:49Z","lastTransitionTime":"2026-01-22T11:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.004261 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.004298 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.004309 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.004323 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.004332 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:50Z","lastTransitionTime":"2026-01-22T11:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.106733 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.106809 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.106830 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.106866 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.106888 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:50Z","lastTransitionTime":"2026-01-22T11:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.209988 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.210032 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.210045 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.210062 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.210073 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:50Z","lastTransitionTime":"2026-01-22T11:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.313683 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.313779 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.313795 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.313818 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.313833 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:50Z","lastTransitionTime":"2026-01-22T11:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.416881 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.416929 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.416945 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.416965 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.416979 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:50Z","lastTransitionTime":"2026-01-22T11:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.518785 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.518839 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.518855 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.518877 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.518894 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:50Z","lastTransitionTime":"2026-01-22T11:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.621386 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.621462 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.621473 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.621487 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.621496 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:50Z","lastTransitionTime":"2026-01-22T11:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.715671 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.715778 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.715853 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:50 crc kubenswrapper[4874]: E0122 11:41:50.716052 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:50 crc kubenswrapper[4874]: E0122 11:41:50.716245 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:50 crc kubenswrapper[4874]: E0122 11:41:50.716373 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.724065 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.724114 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.724128 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.724155 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.724172 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:50Z","lastTransitionTime":"2026-01-22T11:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.765930 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 18:19:26.525062654 +0000 UTC Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.827101 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.827142 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.827151 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.827169 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.827180 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:50Z","lastTransitionTime":"2026-01-22T11:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.930075 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.930143 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.930165 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.930189 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:50 crc kubenswrapper[4874]: I0122 11:41:50.930207 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:50Z","lastTransitionTime":"2026-01-22T11:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.033067 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.033126 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.033145 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.033168 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.033185 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:51Z","lastTransitionTime":"2026-01-22T11:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.136379 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.136487 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.136508 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.136531 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.136548 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:51Z","lastTransitionTime":"2026-01-22T11:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.238647 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.238696 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.238706 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.238721 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.238730 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:51Z","lastTransitionTime":"2026-01-22T11:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.341221 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.341571 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.341689 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.341782 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.341873 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:51Z","lastTransitionTime":"2026-01-22T11:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.444718 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.444783 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.444808 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.444838 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.444860 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:51Z","lastTransitionTime":"2026-01-22T11:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.547016 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.547315 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.547525 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.547669 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.547833 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:51Z","lastTransitionTime":"2026-01-22T11:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.650358 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.651172 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.651330 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.651529 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.651675 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:51Z","lastTransitionTime":"2026-01-22T11:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.715124 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:51 crc kubenswrapper[4874]: E0122 11:41:51.715302 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.754657 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.754959 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.755137 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.755300 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.755464 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:51Z","lastTransitionTime":"2026-01-22T11:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.766122 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 05:45:30.990893467 +0000 UTC Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.858045 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.858298 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.858513 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.858660 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.858790 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:51Z","lastTransitionTime":"2026-01-22T11:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.962275 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.962340 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.962362 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.962386 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:51 crc kubenswrapper[4874]: I0122 11:41:51.962437 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:51Z","lastTransitionTime":"2026-01-22T11:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.065511 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.065883 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.066087 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.066295 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.066531 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:52Z","lastTransitionTime":"2026-01-22T11:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.168965 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.169024 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.169043 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.169069 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.169088 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:52Z","lastTransitionTime":"2026-01-22T11:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.271857 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.271904 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.271915 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.271929 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.271940 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:52Z","lastTransitionTime":"2026-01-22T11:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.374022 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.374065 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.374075 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.374089 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.374099 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:52Z","lastTransitionTime":"2026-01-22T11:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.476684 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.476734 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.476752 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.476789 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.476805 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:52Z","lastTransitionTime":"2026-01-22T11:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.579112 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.579173 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.579194 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.579218 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.579235 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:52Z","lastTransitionTime":"2026-01-22T11:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.680989 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.681056 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.681079 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.681104 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.681122 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:52Z","lastTransitionTime":"2026-01-22T11:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.715999 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.716078 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.716041 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:52 crc kubenswrapper[4874]: E0122 11:41:52.716201 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:52 crc kubenswrapper[4874]: E0122 11:41:52.716480 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:52 crc kubenswrapper[4874]: E0122 11:41:52.716467 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.717624 4874 scope.go:117] "RemoveContainer" containerID="052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35" Jan 22 11:41:52 crc kubenswrapper[4874]: E0122 11:41:52.717872 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.766922 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 08:05:20.240315663 +0000 UTC Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.783834 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.783885 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.783898 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.783917 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.783929 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:52Z","lastTransitionTime":"2026-01-22T11:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.886766 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.886842 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.886865 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.886896 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.886916 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:52Z","lastTransitionTime":"2026-01-22T11:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.989068 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.989102 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.989112 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.989127 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:52 crc kubenswrapper[4874]: I0122 11:41:52.989135 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:52Z","lastTransitionTime":"2026-01-22T11:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.092187 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.092262 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.092285 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.092312 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.092333 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:53Z","lastTransitionTime":"2026-01-22T11:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.195123 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.195163 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.195175 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.195191 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.195204 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:53Z","lastTransitionTime":"2026-01-22T11:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.297276 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.297312 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.297326 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.297342 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.297353 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:53Z","lastTransitionTime":"2026-01-22T11:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.400117 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.400152 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.400161 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.400175 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.400184 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:53Z","lastTransitionTime":"2026-01-22T11:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.503295 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.503347 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.503365 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.503387 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.503428 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:53Z","lastTransitionTime":"2026-01-22T11:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.606997 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.607081 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.607108 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.607141 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.607174 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:53Z","lastTransitionTime":"2026-01-22T11:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.710627 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.710695 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.710718 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.710745 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.710765 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:53Z","lastTransitionTime":"2026-01-22T11:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.715179 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:53 crc kubenswrapper[4874]: E0122 11:41:53.715354 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.767660 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:22:40.104832518 +0000 UTC Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.813370 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.813445 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.813462 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.813484 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.813499 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:53Z","lastTransitionTime":"2026-01-22T11:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.917218 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.917288 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.917311 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.917341 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:53 crc kubenswrapper[4874]: I0122 11:41:53.917363 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:53Z","lastTransitionTime":"2026-01-22T11:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.019736 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.019785 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.019800 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.019823 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.019844 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:54Z","lastTransitionTime":"2026-01-22T11:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.123285 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.123353 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.123375 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.123440 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.123462 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:54Z","lastTransitionTime":"2026-01-22T11:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.226560 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.226936 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.227018 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.227484 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.227552 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:54Z","lastTransitionTime":"2026-01-22T11:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.330543 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.330593 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.330610 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.330632 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.330649 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:54Z","lastTransitionTime":"2026-01-22T11:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.433344 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.433376 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.433384 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.433418 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.433428 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:54Z","lastTransitionTime":"2026-01-22T11:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.536060 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.536096 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.536105 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.536120 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.536131 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:54Z","lastTransitionTime":"2026-01-22T11:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.639178 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.639217 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.639229 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.639247 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.639258 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:54Z","lastTransitionTime":"2026-01-22T11:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.716102 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.716140 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:54 crc kubenswrapper[4874]: E0122 11:41:54.716294 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.716658 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:54 crc kubenswrapper[4874]: E0122 11:41:54.716804 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:54 crc kubenswrapper[4874]: E0122 11:41:54.716833 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.741711 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.741747 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.741759 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.741776 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.741789 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:54Z","lastTransitionTime":"2026-01-22T11:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.768612 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:44:37.820492994 +0000 UTC Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.844816 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.844884 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.844902 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.844927 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.844945 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:54Z","lastTransitionTime":"2026-01-22T11:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.947746 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.947786 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.947797 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.947811 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:54 crc kubenswrapper[4874]: I0122 11:41:54.947824 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:54Z","lastTransitionTime":"2026-01-22T11:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.051142 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.051215 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.051240 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.051269 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.051290 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:55Z","lastTransitionTime":"2026-01-22T11:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.153752 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.153814 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.153830 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.153851 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.153866 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:55Z","lastTransitionTime":"2026-01-22T11:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.256091 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.256163 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.256189 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.256252 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.256275 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:55Z","lastTransitionTime":"2026-01-22T11:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.359572 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.359617 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.359629 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.359645 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.359656 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:55Z","lastTransitionTime":"2026-01-22T11:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.463481 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.463513 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.463523 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.463539 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.463552 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:55Z","lastTransitionTime":"2026-01-22T11:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.565781 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.565817 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.565826 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.565839 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.565848 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:55Z","lastTransitionTime":"2026-01-22T11:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.667919 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.667949 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.667958 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.667973 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.667984 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:55Z","lastTransitionTime":"2026-01-22T11:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.715963 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:55 crc kubenswrapper[4874]: E0122 11:41:55.716151 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.769073 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:11:27.785468592 +0000 UTC Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.770935 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.770962 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.770970 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.770983 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.770992 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:55Z","lastTransitionTime":"2026-01-22T11:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.873023 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.873111 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.873134 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.873164 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.873186 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:55Z","lastTransitionTime":"2026-01-22T11:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.975692 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.975768 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.975794 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.975823 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:55 crc kubenswrapper[4874]: I0122 11:41:55.975845 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:55Z","lastTransitionTime":"2026-01-22T11:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.078598 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.078642 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.078663 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.078682 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.078696 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:56Z","lastTransitionTime":"2026-01-22T11:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.180676 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.180757 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.180785 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.180828 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.180857 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:56Z","lastTransitionTime":"2026-01-22T11:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.283189 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.283236 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.283252 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.283272 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.283287 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:56Z","lastTransitionTime":"2026-01-22T11:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.389210 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.389840 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.389860 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.389881 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.389894 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:56Z","lastTransitionTime":"2026-01-22T11:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.493215 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.493265 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.493274 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.493292 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.493303 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:56Z","lastTransitionTime":"2026-01-22T11:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.596350 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.596451 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.596471 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.596499 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.596516 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:56Z","lastTransitionTime":"2026-01-22T11:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.699431 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.699481 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.699493 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.699509 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.699522 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:56Z","lastTransitionTime":"2026-01-22T11:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.716145 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.717259 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.717358 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:56 crc kubenswrapper[4874]: E0122 11:41:56.717481 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:56 crc kubenswrapper[4874]: E0122 11:41:56.717579 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:56 crc kubenswrapper[4874]: E0122 11:41:56.717727 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.732420 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d78bedba-59c5-4a8a-89c2-4414b15e80a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"lient-ca-file\\\\nI0122 11:40:44.444174 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0122 11:40:44.444187 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0122 11:40:44.444220 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0122 11:40:44.444256 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0122 11:40:44.444339 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769082028\\\\\\\\\\\\\\\" (2026-01-22 11:40:27 +0000 UTC to 2026-02-21 11:40:28 +0000 UTC (now=2026-01-22 11:40:44.444302798 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444371 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-701345747/tls.crt::/tmp/serving-cert-701345747/tls.key\\\\\\\"\\\\nI0122 11:40:44.444515 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769082039\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769082039\\\\\\\\\\\\\\\" (2026-01-22 10:40:38 +0000 UTC to 2027-01-22 10:40:38 +0000 UTC (now=2026-01-22 11:40:44.444487504 +0000 UTC))\\\\\\\"\\\\nI0122 11:40:44.444544 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0122 11:40:44.444567 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0122 11:40:44.444571 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0122 11:40:44.444598 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0122 11:40:44.447070 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8fc1c
5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.754576 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0047f17e-2ad9-4a43-84e3-9a40551de219\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a2f2d27202a2d969dccaf2873952a87458d6493e0349a5f9d57d34c01122496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5677adf9234c26630e4d123a298a3b5b78023b99e2440aedb6ba3d73947258b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4dc4e9e4e97f59e7746733489f35ede2039b0a79334dae080ffce466f69b5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd9d25eaf32a4b740ebdfa226946fff1c0e7117a837ccdf35db16226c33ef7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc3d32eccb2d3485d6d3cdda4602d0c52124a9e7358ce0f8545c0df8d3bc572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc3a57545c435ec01d831ca3e781602465e8888f68010adb602176ee78bcb78f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9991227fae7d5e5b98375d16ef781c350d7c884df8875c48a3d3811031241b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa8313d3c4ced0c9a00c8c60b930f8fd257455bbc79e228fff3c0735d60b6a90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.769750 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:04:40.474966684 +0000 UTC Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.769894 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f82d39b1-4b1a-416e-af98-12acfb504203\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb247eedc7bd6d1590960a8f923c7a967873820b48176afaa52cfa2a5d8f863a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1def1c12d8ce72f4f05db7ba4fb7322bf2df6de33fbec65dda064988256461a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e12b7c1415511ab55581943924cdd0e2fcd0dd85e6c2e820110095a843a7ff00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T11:40:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.781917 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.791754 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5451fbab-ebad-42e7-bb80-f94bad10d571\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmzdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:40:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lr2vd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:56 crc 
kubenswrapper[4874]: I0122 11:41:56.802785 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.802832 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.802842 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.802856 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.802865 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:56Z","lastTransitionTime":"2026-01-22T11:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.804077 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22f3995504d35f199ad3862c187f002d8c17f8700bead094aa251247f29b87ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.817791 4874 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:40:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea295b271c741c5d2ad965686077935d0b12cd4108544715b905fed81d65c453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ddc55a83ed3e6e3cb1c06641bd5523431e37d70f2fdb9c60f854d4b9d7e38b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:40:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T11:41:56Z is after 2025-08-24T17:21:41Z" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.871915 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.871890507 podStartE2EDuration="17.871890507s" podCreationTimestamp="2026-01-22 11:41:39 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:41:56.85749038 +0000 UTC m=+90.702561480" watchObservedRunningTime="2026-01-22 11:41:56.871890507 +0000 UTC m=+90.716961577" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.886857 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=41.886831581 podStartE2EDuration="41.886831581s" podCreationTimestamp="2026-01-22 11:41:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:41:56.872472816 +0000 UTC m=+90.717543906" watchObservedRunningTime="2026-01-22 11:41:56.886831581 +0000 UTC m=+90.731902661" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.899532 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-krrtc" podStartSLOduration=72.899513814 podStartE2EDuration="1m12.899513814s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:41:56.898965917 +0000 UTC m=+90.744036987" watchObservedRunningTime="2026-01-22 11:41:56.899513814 +0000 UTC m=+90.744584884" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.906458 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.906497 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.906508 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.906526 4874 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.906536 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:56Z","lastTransitionTime":"2026-01-22T11:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.917263 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-pdb2m" podStartSLOduration=72.917243008 podStartE2EDuration="1m12.917243008s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:41:56.917170105 +0000 UTC m=+90.762241185" watchObservedRunningTime="2026-01-22 11:41:56.917243008 +0000 UTC m=+90.762314078" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.942761 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-q2rnk" podStartSLOduration=72.942737526 podStartE2EDuration="1m12.942737526s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:41:56.942603653 +0000 UTC m=+90.787674743" watchObservedRunningTime="2026-01-22 11:41:56.942737526 +0000 UTC m=+90.787808596" Jan 22 11:41:56 crc kubenswrapper[4874]: I0122 11:41:56.964498 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x5vd5" podStartSLOduration=71.964476987 
podStartE2EDuration="1m11.964476987s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:41:56.964133646 +0000 UTC m=+90.809204726" watchObservedRunningTime="2026-01-22 11:41:56.964476987 +0000 UTC m=+90.809548067" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.008713 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-prbck" podStartSLOduration=73.008652 podStartE2EDuration="1m13.008652s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:41:57.008449784 +0000 UTC m=+90.853520854" watchObservedRunningTime="2026-01-22 11:41:57.008652 +0000 UTC m=+90.853723070" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.009096 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.009129 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.009143 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.009166 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.009179 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:57Z","lastTransitionTime":"2026-01-22T11:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.023602 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podStartSLOduration=73.023576084 podStartE2EDuration="1m13.023576084s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:41:57.022874972 +0000 UTC m=+90.867946042" watchObservedRunningTime="2026-01-22 11:41:57.023576084 +0000 UTC m=+90.868647154" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.111424 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.111474 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.111483 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.111499 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.111510 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:57Z","lastTransitionTime":"2026-01-22T11:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.213635 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.213690 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.213705 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.213727 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.213745 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:57Z","lastTransitionTime":"2026-01-22T11:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.316305 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.316356 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.316370 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.316391 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.316429 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:57Z","lastTransitionTime":"2026-01-22T11:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.419003 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.419040 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.419051 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.419068 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.419078 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:57Z","lastTransitionTime":"2026-01-22T11:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.521802 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.522094 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.522165 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.522242 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.522314 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:57Z","lastTransitionTime":"2026-01-22T11:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.625562 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.625626 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.625649 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.625674 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.625692 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:57Z","lastTransitionTime":"2026-01-22T11:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.715967 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:57 crc kubenswrapper[4874]: E0122 11:41:57.716170 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.727954 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.728003 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.728011 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.728026 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.728036 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:57Z","lastTransitionTime":"2026-01-22T11:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.770148 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:11:22.252468638 +0000 UTC Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.830501 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.830563 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.830580 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.830607 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.830624 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:57Z","lastTransitionTime":"2026-01-22T11:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.933823 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.933873 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.933889 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.933914 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:57 crc kubenswrapper[4874]: I0122 11:41:57.933933 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:57Z","lastTransitionTime":"2026-01-22T11:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.036898 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.037010 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.037036 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.037070 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.037097 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:58Z","lastTransitionTime":"2026-01-22T11:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.139124 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.139170 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.139181 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.139197 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.139208 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:58Z","lastTransitionTime":"2026-01-22T11:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.242261 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.242295 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.242307 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.242329 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.242340 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:58Z","lastTransitionTime":"2026-01-22T11:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.345313 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.345379 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.345425 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.345451 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.345468 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:58Z","lastTransitionTime":"2026-01-22T11:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.448294 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.448343 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.448352 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.448369 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.448380 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:58Z","lastTransitionTime":"2026-01-22T11:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.550629 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.550685 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.550701 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.550720 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.550736 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:58Z","lastTransitionTime":"2026-01-22T11:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.653166 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.653207 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.653221 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.653236 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.653247 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:58Z","lastTransitionTime":"2026-01-22T11:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.716208 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.716270 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:41:58 crc kubenswrapper[4874]: E0122 11:41:58.716329 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.716208 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:41:58 crc kubenswrapper[4874]: E0122 11:41:58.716389 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:41:58 crc kubenswrapper[4874]: E0122 11:41:58.716689 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.755960 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.756019 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.756041 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.756065 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.756082 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:58Z","lastTransitionTime":"2026-01-22T11:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.770380 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 18:01:29.590806248 +0000 UTC Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.859151 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.859202 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.859218 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.859239 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.859256 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:58Z","lastTransitionTime":"2026-01-22T11:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.962125 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.962196 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.962210 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.962238 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:58 crc kubenswrapper[4874]: I0122 11:41:58.962261 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:58Z","lastTransitionTime":"2026-01-22T11:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.065661 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.065734 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.065755 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.065782 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.065799 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:59Z","lastTransitionTime":"2026-01-22T11:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.169439 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.169525 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.169546 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.169568 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.169588 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:59Z","lastTransitionTime":"2026-01-22T11:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.272599 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.273016 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.273221 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.273490 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.273695 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:59Z","lastTransitionTime":"2026-01-22T11:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.286317 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.286360 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.286377 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.286449 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.286476 4874 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:41:59Z","lastTransitionTime":"2026-01-22T11:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.345258 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw"] Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.345996 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.348078 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.348547 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.348922 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.349622 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.396306 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=73.39628937 podStartE2EDuration="1m13.39628937s" podCreationTimestamp="2026-01-22 11:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:41:59.39534681 +0000 UTC m=+93.240417870" watchObservedRunningTime="2026-01-22 11:41:59.39628937 +0000 UTC m=+93.241360440" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.396523 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=75.396518157 podStartE2EDuration="1m15.396518157s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:41:59.36698123 +0000 UTC m=+93.212052340" watchObservedRunningTime="2026-01-22 11:41:59.396518157 +0000 UTC m=+93.241589227" Jan 22 11:41:59 crc 
kubenswrapper[4874]: I0122 11:41:59.409464 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=67.409447228 podStartE2EDuration="1m7.409447228s" podCreationTimestamp="2026-01-22 11:40:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:41:59.409295093 +0000 UTC m=+93.254366173" watchObservedRunningTime="2026-01-22 11:41:59.409447228 +0000 UTC m=+93.254518298" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.445794 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85f182db-0073-40ce-ae9b-94bbbef42279-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.445857 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85f182db-0073-40ce-ae9b-94bbbef42279-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.445875 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85f182db-0073-40ce-ae9b-94bbbef42279-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.445907 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85f182db-0073-40ce-ae9b-94bbbef42279-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.445921 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85f182db-0073-40ce-ae9b-94bbbef42279-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.546473 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85f182db-0073-40ce-ae9b-94bbbef42279-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.546529 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85f182db-0073-40ce-ae9b-94bbbef42279-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.546576 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85f182db-0073-40ce-ae9b-94bbbef42279-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: 
\"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.546656 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85f182db-0073-40ce-ae9b-94bbbef42279-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.546684 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85f182db-0073-40ce-ae9b-94bbbef42279-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.546748 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85f182db-0073-40ce-ae9b-94bbbef42279-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.546767 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85f182db-0073-40ce-ae9b-94bbbef42279-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.547468 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/85f182db-0073-40ce-ae9b-94bbbef42279-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.561189 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85f182db-0073-40ce-ae9b-94bbbef42279-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.565429 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85f182db-0073-40ce-ae9b-94bbbef42279-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qfxsw\" (UID: \"85f182db-0073-40ce-ae9b-94bbbef42279\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.670674 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" Jan 22 11:41:59 crc kubenswrapper[4874]: W0122 11:41:59.687472 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85f182db_0073_40ce_ae9b_94bbbef42279.slice/crio-6784cd319e5c3ab88125d0a600b509ee3650d8d47aea2791f95196f3f207a8ec WatchSource:0}: Error finding container 6784cd319e5c3ab88125d0a600b509ee3650d8d47aea2791f95196f3f207a8ec: Status 404 returned error can't find the container with id 6784cd319e5c3ab88125d0a600b509ee3650d8d47aea2791f95196f3f207a8ec Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.715511 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:41:59 crc kubenswrapper[4874]: E0122 11:41:59.715687 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.771045 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:00:35.117662431 +0000 UTC Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.771130 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 22 11:41:59 crc kubenswrapper[4874]: I0122 11:41:59.779139 4874 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 11:42:00 crc kubenswrapper[4874]: I0122 11:42:00.555161 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" event={"ID":"85f182db-0073-40ce-ae9b-94bbbef42279","Type":"ContainerStarted","Data":"e6966912417cfcda36c2d50d4aad5c911d1c1773b45ce73ea88d890e1bc29126"} Jan 22 11:42:00 crc kubenswrapper[4874]: I0122 11:42:00.555232 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" event={"ID":"85f182db-0073-40ce-ae9b-94bbbef42279","Type":"ContainerStarted","Data":"6784cd319e5c3ab88125d0a600b509ee3650d8d47aea2791f95196f3f207a8ec"} Jan 22 11:42:00 crc kubenswrapper[4874]: I0122 11:42:00.577188 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qfxsw" podStartSLOduration=76.577161009 podStartE2EDuration="1m16.577161009s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:00.57623817 +0000 UTC m=+94.421309270" watchObservedRunningTime="2026-01-22 11:42:00.577161009 +0000 UTC m=+94.422232109" Jan 22 11:42:00 crc kubenswrapper[4874]: I0122 11:42:00.715862 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:00 crc kubenswrapper[4874]: I0122 11:42:00.715968 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:00 crc kubenswrapper[4874]: E0122 11:42:00.716129 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:00 crc kubenswrapper[4874]: E0122 11:42:00.716361 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:00 crc kubenswrapper[4874]: I0122 11:42:00.717275 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:00 crc kubenswrapper[4874]: E0122 11:42:00.717514 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:01 crc kubenswrapper[4874]: I0122 11:42:01.715803 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:01 crc kubenswrapper[4874]: E0122 11:42:01.715959 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:02 crc kubenswrapper[4874]: I0122 11:42:02.480439 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:02 crc kubenswrapper[4874]: E0122 11:42:02.480716 4874 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:42:02 crc kubenswrapper[4874]: E0122 11:42:02.480972 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs podName:5451fbab-ebad-42e7-bb80-f94bad10d571 nodeName:}" failed. No retries permitted until 2026-01-22 11:43:06.480952295 +0000 UTC m=+160.326023365 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs") pod "network-metrics-daemon-lr2vd" (UID: "5451fbab-ebad-42e7-bb80-f94bad10d571") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:42:02 crc kubenswrapper[4874]: I0122 11:42:02.715799 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:02 crc kubenswrapper[4874]: E0122 11:42:02.716287 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:02 crc kubenswrapper[4874]: I0122 11:42:02.715894 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:02 crc kubenswrapper[4874]: E0122 11:42:02.717030 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:02 crc kubenswrapper[4874]: I0122 11:42:02.715885 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:02 crc kubenswrapper[4874]: E0122 11:42:02.717552 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:03 crc kubenswrapper[4874]: I0122 11:42:03.715316 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:03 crc kubenswrapper[4874]: E0122 11:42:03.715623 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:04 crc kubenswrapper[4874]: I0122 11:42:04.716151 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:04 crc kubenswrapper[4874]: I0122 11:42:04.716207 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:04 crc kubenswrapper[4874]: E0122 11:42:04.716272 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:04 crc kubenswrapper[4874]: E0122 11:42:04.716373 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:04 crc kubenswrapper[4874]: I0122 11:42:04.716559 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:04 crc kubenswrapper[4874]: E0122 11:42:04.716691 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:04 crc kubenswrapper[4874]: I0122 11:42:04.717695 4874 scope.go:117] "RemoveContainer" containerID="052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35" Jan 22 11:42:04 crc kubenswrapper[4874]: E0122 11:42:04.717987 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" Jan 22 11:42:05 crc kubenswrapper[4874]: I0122 11:42:05.714980 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:05 crc kubenswrapper[4874]: E0122 11:42:05.715094 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:06 crc kubenswrapper[4874]: I0122 11:42:06.716213 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:06 crc kubenswrapper[4874]: I0122 11:42:06.716224 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:06 crc kubenswrapper[4874]: I0122 11:42:06.716294 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:06 crc kubenswrapper[4874]: E0122 11:42:06.717512 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:06 crc kubenswrapper[4874]: E0122 11:42:06.717735 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:06 crc kubenswrapper[4874]: E0122 11:42:06.717931 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:07 crc kubenswrapper[4874]: I0122 11:42:07.716042 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:07 crc kubenswrapper[4874]: E0122 11:42:07.716280 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:08 crc kubenswrapper[4874]: I0122 11:42:08.716047 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:08 crc kubenswrapper[4874]: I0122 11:42:08.716117 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:08 crc kubenswrapper[4874]: I0122 11:42:08.716055 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:08 crc kubenswrapper[4874]: E0122 11:42:08.716191 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:08 crc kubenswrapper[4874]: E0122 11:42:08.716303 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:08 crc kubenswrapper[4874]: E0122 11:42:08.716467 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:09 crc kubenswrapper[4874]: I0122 11:42:09.715608 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:09 crc kubenswrapper[4874]: E0122 11:42:09.715932 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:10 crc kubenswrapper[4874]: I0122 11:42:10.715856 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:10 crc kubenswrapper[4874]: I0122 11:42:10.715896 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:10 crc kubenswrapper[4874]: E0122 11:42:10.716017 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:10 crc kubenswrapper[4874]: I0122 11:42:10.715856 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:10 crc kubenswrapper[4874]: E0122 11:42:10.716114 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:10 crc kubenswrapper[4874]: E0122 11:42:10.716174 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:11 crc kubenswrapper[4874]: I0122 11:42:11.715175 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:11 crc kubenswrapper[4874]: E0122 11:42:11.715349 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:12 crc kubenswrapper[4874]: I0122 11:42:12.715498 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:12 crc kubenswrapper[4874]: E0122 11:42:12.715661 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:12 crc kubenswrapper[4874]: I0122 11:42:12.715857 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:12 crc kubenswrapper[4874]: I0122 11:42:12.715531 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:12 crc kubenswrapper[4874]: E0122 11:42:12.716020 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:12 crc kubenswrapper[4874]: E0122 11:42:12.716133 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:13 crc kubenswrapper[4874]: I0122 11:42:13.715725 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:13 crc kubenswrapper[4874]: E0122 11:42:13.715912 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:14 crc kubenswrapper[4874]: I0122 11:42:14.715314 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:14 crc kubenswrapper[4874]: I0122 11:42:14.715342 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:14 crc kubenswrapper[4874]: E0122 11:42:14.715515 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:14 crc kubenswrapper[4874]: I0122 11:42:14.715570 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:14 crc kubenswrapper[4874]: E0122 11:42:14.715754 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:14 crc kubenswrapper[4874]: E0122 11:42:14.715819 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:15 crc kubenswrapper[4874]: I0122 11:42:15.715877 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:15 crc kubenswrapper[4874]: E0122 11:42:15.716079 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:16 crc kubenswrapper[4874]: I0122 11:42:16.715491 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:16 crc kubenswrapper[4874]: I0122 11:42:16.715594 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:16 crc kubenswrapper[4874]: E0122 11:42:16.718381 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:16 crc kubenswrapper[4874]: I0122 11:42:16.718477 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:16 crc kubenswrapper[4874]: E0122 11:42:16.718743 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:16 crc kubenswrapper[4874]: E0122 11:42:16.718621 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:17 crc kubenswrapper[4874]: I0122 11:42:17.619666 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krrtc_977746b5-ac1b-4b6e-bdbc-ddd90225e68c/kube-multus/1.log" Jan 22 11:42:17 crc kubenswrapper[4874]: I0122 11:42:17.620662 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krrtc_977746b5-ac1b-4b6e-bdbc-ddd90225e68c/kube-multus/0.log" Jan 22 11:42:17 crc kubenswrapper[4874]: I0122 11:42:17.620708 4874 generic.go:334] "Generic (PLEG): container finished" podID="977746b5-ac1b-4b6e-bdbc-ddd90225e68c" containerID="cecfbaa0efaf8c435c3409ccad9deaa4cc25167f0b978622d1ab9c949c4024c8" exitCode=1 Jan 22 11:42:17 crc kubenswrapper[4874]: I0122 11:42:17.620776 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krrtc" event={"ID":"977746b5-ac1b-4b6e-bdbc-ddd90225e68c","Type":"ContainerDied","Data":"cecfbaa0efaf8c435c3409ccad9deaa4cc25167f0b978622d1ab9c949c4024c8"} Jan 22 11:42:17 crc kubenswrapper[4874]: I0122 11:42:17.620999 4874 scope.go:117] "RemoveContainer" containerID="600d18ed6bc497772f5a8a2f70412d8e845c43e6f54c9f30666f362ab94aacbf" Jan 22 11:42:17 crc kubenswrapper[4874]: I0122 11:42:17.622180 4874 scope.go:117] "RemoveContainer" containerID="cecfbaa0efaf8c435c3409ccad9deaa4cc25167f0b978622d1ab9c949c4024c8" Jan 22 11:42:17 crc kubenswrapper[4874]: E0122 11:42:17.622481 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-krrtc_openshift-multus(977746b5-ac1b-4b6e-bdbc-ddd90225e68c)\"" pod="openshift-multus/multus-krrtc" podUID="977746b5-ac1b-4b6e-bdbc-ddd90225e68c" Jan 22 11:42:17 crc kubenswrapper[4874]: I0122 11:42:17.716191 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:17 crc kubenswrapper[4874]: E0122 11:42:17.716483 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:18 crc kubenswrapper[4874]: I0122 11:42:18.624769 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krrtc_977746b5-ac1b-4b6e-bdbc-ddd90225e68c/kube-multus/1.log" Jan 22 11:42:18 crc kubenswrapper[4874]: I0122 11:42:18.715999 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:18 crc kubenswrapper[4874]: I0122 11:42:18.716020 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:18 crc kubenswrapper[4874]: I0122 11:42:18.716136 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:18 crc kubenswrapper[4874]: E0122 11:42:18.716252 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:18 crc kubenswrapper[4874]: E0122 11:42:18.716442 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:18 crc kubenswrapper[4874]: E0122 11:42:18.716580 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:18 crc kubenswrapper[4874]: I0122 11:42:18.717307 4874 scope.go:117] "RemoveContainer" containerID="052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35" Jan 22 11:42:18 crc kubenswrapper[4874]: E0122 11:42:18.717512 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6tmll_openshift-ovn-kubernetes(642d0ca0-2e0f-4b69-9484-a63d0a01f8a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" Jan 22 11:42:19 crc kubenswrapper[4874]: I0122 11:42:19.715536 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:19 crc kubenswrapper[4874]: E0122 11:42:19.716205 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:20 crc kubenswrapper[4874]: I0122 11:42:20.717473 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:20 crc kubenswrapper[4874]: I0122 11:42:20.717631 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:20 crc kubenswrapper[4874]: E0122 11:42:20.717652 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:20 crc kubenswrapper[4874]: I0122 11:42:20.717937 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:20 crc kubenswrapper[4874]: E0122 11:42:20.718111 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:20 crc kubenswrapper[4874]: E0122 11:42:20.717868 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:21 crc kubenswrapper[4874]: I0122 11:42:21.715524 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:21 crc kubenswrapper[4874]: E0122 11:42:21.715694 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:22 crc kubenswrapper[4874]: I0122 11:42:22.715665 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:22 crc kubenswrapper[4874]: E0122 11:42:22.715783 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:22 crc kubenswrapper[4874]: I0122 11:42:22.715766 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:22 crc kubenswrapper[4874]: I0122 11:42:22.715936 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:22 crc kubenswrapper[4874]: E0122 11:42:22.716136 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:22 crc kubenswrapper[4874]: E0122 11:42:22.716490 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:23 crc kubenswrapper[4874]: I0122 11:42:23.715841 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:23 crc kubenswrapper[4874]: E0122 11:42:23.716139 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:24 crc kubenswrapper[4874]: I0122 11:42:24.716091 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:24 crc kubenswrapper[4874]: I0122 11:42:24.716207 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:24 crc kubenswrapper[4874]: I0122 11:42:24.716091 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:24 crc kubenswrapper[4874]: E0122 11:42:24.716325 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:24 crc kubenswrapper[4874]: E0122 11:42:24.716451 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:24 crc kubenswrapper[4874]: E0122 11:42:24.716531 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:25 crc kubenswrapper[4874]: I0122 11:42:25.716106 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:25 crc kubenswrapper[4874]: E0122 11:42:25.716752 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:26 crc kubenswrapper[4874]: I0122 11:42:26.715871 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:26 crc kubenswrapper[4874]: I0122 11:42:26.715998 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:26 crc kubenswrapper[4874]: E0122 11:42:26.716102 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:26 crc kubenswrapper[4874]: I0122 11:42:26.716175 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:26 crc kubenswrapper[4874]: E0122 11:42:26.718434 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:26 crc kubenswrapper[4874]: E0122 11:42:26.718623 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:26 crc kubenswrapper[4874]: E0122 11:42:26.732648 4874 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 22 11:42:26 crc kubenswrapper[4874]: E0122 11:42:26.818091 4874 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 11:42:27 crc kubenswrapper[4874]: I0122 11:42:27.715978 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:27 crc kubenswrapper[4874]: E0122 11:42:27.716177 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:28 crc kubenswrapper[4874]: I0122 11:42:28.716082 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:28 crc kubenswrapper[4874]: E0122 11:42:28.716279 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:28 crc kubenswrapper[4874]: I0122 11:42:28.717652 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:28 crc kubenswrapper[4874]: E0122 11:42:28.717804 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:28 crc kubenswrapper[4874]: I0122 11:42:28.716042 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:28 crc kubenswrapper[4874]: E0122 11:42:28.719728 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:29 crc kubenswrapper[4874]: I0122 11:42:29.716016 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:29 crc kubenswrapper[4874]: E0122 11:42:29.716652 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:29 crc kubenswrapper[4874]: I0122 11:42:29.718108 4874 scope.go:117] "RemoveContainer" containerID="052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35" Jan 22 11:42:30 crc kubenswrapper[4874]: I0122 11:42:30.624710 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lr2vd"] Jan 22 11:42:30 crc kubenswrapper[4874]: I0122 11:42:30.671315 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/3.log" Jan 22 11:42:30 crc kubenswrapper[4874]: I0122 11:42:30.673871 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerStarted","Data":"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161"} Jan 22 11:42:30 crc kubenswrapper[4874]: I0122 11:42:30.673911 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:30 crc kubenswrapper[4874]: E0122 11:42:30.674028 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:30 crc kubenswrapper[4874]: I0122 11:42:30.674256 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:42:30 crc kubenswrapper[4874]: I0122 11:42:30.704874 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podStartSLOduration=106.704852279 podStartE2EDuration="1m46.704852279s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:30.704609281 +0000 UTC m=+124.549680371" watchObservedRunningTime="2026-01-22 11:42:30.704852279 +0000 UTC m=+124.549923349" Jan 22 11:42:30 crc kubenswrapper[4874]: I0122 11:42:30.715211 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:30 crc kubenswrapper[4874]: I0122 11:42:30.715257 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:30 crc kubenswrapper[4874]: I0122 11:42:30.715292 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:30 crc kubenswrapper[4874]: E0122 11:42:30.715343 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:30 crc kubenswrapper[4874]: E0122 11:42:30.715545 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:30 crc kubenswrapper[4874]: E0122 11:42:30.715750 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:31 crc kubenswrapper[4874]: E0122 11:42:31.820112 4874 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 11:42:32 crc kubenswrapper[4874]: I0122 11:42:32.715275 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:32 crc kubenswrapper[4874]: I0122 11:42:32.715343 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:32 crc kubenswrapper[4874]: E0122 11:42:32.715462 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:32 crc kubenswrapper[4874]: I0122 11:42:32.715275 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:32 crc kubenswrapper[4874]: I0122 11:42:32.715539 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:32 crc kubenswrapper[4874]: E0122 11:42:32.715774 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:32 crc kubenswrapper[4874]: E0122 11:42:32.716185 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:32 crc kubenswrapper[4874]: I0122 11:42:32.716323 4874 scope.go:117] "RemoveContainer" containerID="cecfbaa0efaf8c435c3409ccad9deaa4cc25167f0b978622d1ab9c949c4024c8" Jan 22 11:42:32 crc kubenswrapper[4874]: E0122 11:42:32.716363 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:33 crc kubenswrapper[4874]: I0122 11:42:33.687249 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krrtc_977746b5-ac1b-4b6e-bdbc-ddd90225e68c/kube-multus/1.log" Jan 22 11:42:33 crc kubenswrapper[4874]: I0122 11:42:33.687337 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krrtc" event={"ID":"977746b5-ac1b-4b6e-bdbc-ddd90225e68c","Type":"ContainerStarted","Data":"55eeb9abd8c425711e374c107c22ec24d1741880327f226d7db5e06d67925630"} Jan 22 11:42:34 crc kubenswrapper[4874]: I0122 11:42:34.716239 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:34 crc kubenswrapper[4874]: I0122 11:42:34.716278 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:34 crc kubenswrapper[4874]: I0122 11:42:34.716369 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:34 crc kubenswrapper[4874]: E0122 11:42:34.716451 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:34 crc kubenswrapper[4874]: I0122 11:42:34.716547 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:34 crc kubenswrapper[4874]: E0122 11:42:34.717001 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:34 crc kubenswrapper[4874]: E0122 11:42:34.717274 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:34 crc kubenswrapper[4874]: E0122 11:42:34.717538 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:36 crc kubenswrapper[4874]: I0122 11:42:36.716029 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:36 crc kubenswrapper[4874]: I0122 11:42:36.716087 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:36 crc kubenswrapper[4874]: I0122 11:42:36.716177 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:36 crc kubenswrapper[4874]: I0122 11:42:36.716252 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:36 crc kubenswrapper[4874]: E0122 11:42:36.718075 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 11:42:36 crc kubenswrapper[4874]: E0122 11:42:36.718241 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 11:42:36 crc kubenswrapper[4874]: E0122 11:42:36.718587 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 11:42:36 crc kubenswrapper[4874]: E0122 11:42:36.718632 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lr2vd" podUID="5451fbab-ebad-42e7-bb80-f94bad10d571" Jan 22 11:42:38 crc kubenswrapper[4874]: I0122 11:42:38.716149 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:38 crc kubenswrapper[4874]: I0122 11:42:38.716215 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:38 crc kubenswrapper[4874]: I0122 11:42:38.716959 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:42:38 crc kubenswrapper[4874]: I0122 11:42:38.717307 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:38 crc kubenswrapper[4874]: I0122 11:42:38.718746 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 11:42:38 crc kubenswrapper[4874]: I0122 11:42:38.719530 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 11:42:38 crc kubenswrapper[4874]: I0122 11:42:38.719716 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 11:42:38 crc kubenswrapper[4874]: I0122 11:42:38.719893 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 11:42:38 crc kubenswrapper[4874]: I0122 11:42:38.720259 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 11:42:38 crc kubenswrapper[4874]: I0122 11:42:38.722111 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.498520 4874 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.544806 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-swcnq"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 
11:42:40.545216 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.547375 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.547804 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.548766 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.549823 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wmwcz"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.549883 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.550136 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.550897 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.551091 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-869fn"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.551435 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.552375 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.552609 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.552969 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.559126 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.566809 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.568257 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.570126 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.571207 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.597280 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.598818 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.599307 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.599515 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.599637 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.600502 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.600517 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.600832 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.600948 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.600966 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 
11:42:40.601116 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.601247 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.601520 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.601635 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.602123 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.602203 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.603914 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.606620 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.607526 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mftbv"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.607875 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.608111 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 11:42:40 crc 
kubenswrapper[4874]: I0122 11:42:40.608197 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jgw4c"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.608248 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.608382 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.608554 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jgw4c" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.609027 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.609538 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.609674 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.609889 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610355 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fb7b4d4-0441-43c4-9596-4d38b369d661-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: 
I0122 11:42:40.610390 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d49cfd6-9c1b-481f-a9d3-07a99661cf9d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7h7c7\" (UID: \"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610432 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fb7b4d4-0441-43c4-9596-4d38b369d661-service-ca-bundle\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610456 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9561fa-20b6-4f87-aacc-cf0e0665ffa4-config\") pod \"machine-approver-56656f9798-ssbbs\" (UID: \"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610478 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d49cfd6-9c1b-481f-a9d3-07a99661cf9d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7h7c7\" (UID: \"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610499 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/9bec261f-fd1e-44f7-a402-cae34f722b6c-images\") pod \"machine-api-operator-5694c8668f-wmwcz\" (UID: \"9bec261f-fd1e-44f7-a402-cae34f722b6c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610519 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca44490a-7fc8-478d-a6e2-670f49816b81-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610552 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca44490a-7fc8-478d-a6e2-670f49816b81-encryption-config\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610583 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d49cfd6-9c1b-481f-a9d3-07a99661cf9d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7h7c7\" (UID: \"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610603 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5464a23-ec80-4717-bfe0-6efeab811853-serving-cert\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc 
kubenswrapper[4874]: I0122 11:42:40.610624 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca44490a-7fc8-478d-a6e2-670f49816b81-etcd-client\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610645 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca44490a-7fc8-478d-a6e2-670f49816b81-audit-policies\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610672 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca44490a-7fc8-478d-a6e2-670f49816b81-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610694 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqws\" (UniqueName: \"kubernetes.io/projected/ca44490a-7fc8-478d-a6e2-670f49816b81-kube-api-access-vkqws\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610719 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bec261f-fd1e-44f7-a402-cae34f722b6c-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-wmwcz\" (UID: \"9bec261f-fd1e-44f7-a402-cae34f722b6c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610740 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6c9561fa-20b6-4f87-aacc-cf0e0665ffa4-machine-approver-tls\") pod \"machine-approver-56656f9798-ssbbs\" (UID: \"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610766 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb7b4d4-0441-43c4-9596-4d38b369d661-config\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610786 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nm6v\" (UniqueName: \"kubernetes.io/projected/f5464a23-ec80-4717-bfe0-6efeab811853-kube-api-access-2nm6v\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610806 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bec261f-fd1e-44f7-a402-cae34f722b6c-config\") pod \"machine-api-operator-5694c8668f-wmwcz\" (UID: \"9bec261f-fd1e-44f7-a402-cae34f722b6c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610828 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c9561fa-20b6-4f87-aacc-cf0e0665ffa4-auth-proxy-config\") pod \"machine-approver-56656f9798-ssbbs\" (UID: \"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610847 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-client-ca\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610868 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fb7b4d4-0441-43c4-9596-4d38b369d661-serving-cert\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610889 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-config\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610909 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b97ls\" (UniqueName: \"kubernetes.io/projected/9bec261f-fd1e-44f7-a402-cae34f722b6c-kube-api-access-b97ls\") pod 
\"machine-api-operator-5694c8668f-wmwcz\" (UID: \"9bec261f-fd1e-44f7-a402-cae34f722b6c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610934 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca44490a-7fc8-478d-a6e2-670f49816b81-audit-dir\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610965 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca44490a-7fc8-478d-a6e2-670f49816b81-serving-cert\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.610995 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsxpl\" (UniqueName: \"kubernetes.io/projected/6c9561fa-20b6-4f87-aacc-cf0e0665ffa4-kube-api-access-tsxpl\") pod \"machine-approver-56656f9798-ssbbs\" (UID: \"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.611018 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nfcq\" (UniqueName: \"kubernetes.io/projected/0d49cfd6-9c1b-481f-a9d3-07a99661cf9d-kube-api-access-9nfcq\") pod \"cluster-image-registry-operator-dc59b4c8b-7h7c7\" (UID: \"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.611040 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.611063 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7qdr\" (UniqueName: \"kubernetes.io/projected/5fb7b4d4-0441-43c4-9596-4d38b369d661-kube-api-access-x7qdr\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.611600 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.611874 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.612083 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.612095 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.613196 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.613414 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 
22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.613674 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.621843 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.621889 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.621897 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.621933 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.622199 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.624239 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.625019 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.625262 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.625439 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.625529 4874 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.625565 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.627630 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.630485 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8j9tz"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.630882 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cqkbr"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.631293 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.631441 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cqkbr"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.631639 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.632705 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.637609 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.637726 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.637847 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.638061 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.641175 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.642870 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.644009 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.644297 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.644537 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.644659 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.644779 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.644971 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.655922 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.656234 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.657360 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-wws2s"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.659187 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.659891 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.660061 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.660193 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.660423 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.661292 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.661567 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.661851 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.678723 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.678886 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.679199 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.683077 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.683804 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wws2s"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.684144 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.684334 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.684597 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.685012 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.685205 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.685606 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.685639 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rwcrv"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.685654 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.685735 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.686118 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.686287 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rwcrv"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.686522 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5cczs"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.686924 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5cczs"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.690214 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.690656 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.690805 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.690899 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.691160 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.690904 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.692112 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.692316 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.692540 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.692647 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.693736 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.693867 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.694215 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.694310 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.694632 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-swcnq"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.694671 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.695455 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.695842 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.696458 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.697252 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.697989 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.698170 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.699524 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.699667 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.699806 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.699920 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.701706 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.701996 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.702320 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.702355 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.702517 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.702572 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.703538 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.704428 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rx6nv"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.705173 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rx6nv"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.705323 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jg4wj"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.705848 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.707378 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.707964 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.708493 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.708779 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.712019 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.712638 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.712905 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.713282 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.714253 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sjhjk"]
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.716385 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.717330 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca44490a-7fc8-478d-a6e2-670f49816b81-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.717386 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9fd4241f-b523-4d66-bcdb-c3bb691765c9-audit-dir\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.717450 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2xwq\" (UniqueName: \"kubernetes.io/projected/308c337e-1e28-4e34-9ccb-8ae546eee089-kube-api-access-j2xwq\") pod \"cluster-samples-operator-665b6dd947-qfcsk\" (UID: \"308c337e-1e28-4e34-9ccb-8ae546eee089\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.717532 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kg7v\" (UniqueName: \"kubernetes.io/projected/16747491-2b7d-4cb1-841d-61c6f366cf8a-kube-api-access-2kg7v\") pod \"openshift-config-operator-7777fb866f-gv9x5\" (UID: \"16747491-2b7d-4cb1-841d-61c6f366cf8a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.717574 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/308c337e-1e28-4e34-9ccb-8ae546eee089-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qfcsk\" (UID: \"308c337e-1e28-4e34-9ccb-8ae546eee089\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.717825 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca44490a-7fc8-478d-a6e2-670f49816b81-encryption-config\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.717880 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/288e6358-c74b-4597-8968-726a31365f82-audit-dir\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.717927 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9hxd\" (UniqueName: \"kubernetes.io/projected/68dcdc32-485b-435c-81cc-43be463998bb-kube-api-access-c9hxd\") pod \"openshift-apiserver-operator-796bbdcf4f-dnrhx\" (UID: \"68dcdc32-485b-435c-81cc-43be463998bb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.718102 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-audit-policies\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.718156 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d49cfd6-9c1b-481f-a9d3-07a99661cf9d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7h7c7\" (UID: \"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.718193 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5464a23-ec80-4717-bfe0-6efeab811853-serving-cert\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.718235 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2dad3db6-cddd-457d-8efa-908257ef7cc5-service-ca\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.718288 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca44490a-7fc8-478d-a6e2-670f49816b81-etcd-client\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.718321 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.718350 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca44490a-7fc8-478d-a6e2-670f49816b81-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.718657 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca44490a-7fc8-478d-a6e2-670f49816b81-audit-policies\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.718699 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2dad3db6-cddd-457d-8efa-908257ef7cc5-console-config\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.718740 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16747491-2b7d-4cb1-841d-61c6f366cf8a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gv9x5\" (UID: \"16747491-2b7d-4cb1-841d-61c6f366cf8a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.718778 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65f71c2e-ab34-4d33-905f-609555dab78c-client-ca\") pod \"route-controller-manager-6576b87f9c-2bn87\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.718810 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9446a39-9776-4de8-9137-b8952d336419-service-ca-bundle\") pod \"router-default-5444994796-5cczs\" (UID: \"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.719135 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca44490a-7fc8-478d-a6e2-670f49816b81-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.719184 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.719228 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.719375 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh722\" (UniqueName: \"kubernetes.io/projected/304c66b8-6187-47ac-9c57-235c634eaae4-kube-api-access-sh722\") pod \"migrator-59844c95c7-rwcrv\" (UID: \"304c66b8-6187-47ac-9c57-235c634eaae4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rwcrv"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.719431 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/288e6358-c74b-4597-8968-726a31365f82-node-pullsecrets\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.719471 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqws\" (UniqueName: \"kubernetes.io/projected/ca44490a-7fc8-478d-a6e2-670f49816b81-kube-api-access-vkqws\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.719662 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2dad3db6-cddd-457d-8efa-908257ef7cc5-console-serving-cert\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.719691 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9jtx\" (UniqueName: \"kubernetes.io/projected/c045a558-1990-432e-9965-e918f60aba14-kube-api-access-b9jtx\") pod \"machine-config-controller-84d6567774-dprzc\" (UID: \"c045a558-1990-432e-9965-e918f60aba14\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.719722 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m29d2\" (UniqueName: \"kubernetes.io/projected/65f71c2e-ab34-4d33-905f-609555dab78c-kube-api-access-m29d2\") pod \"route-controller-manager-6576b87f9c-2bn87\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.719932 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj795\" (UniqueName: \"kubernetes.io/projected/3b9983ba-ed8d-4654-ba74-f25433aa7ee7-kube-api-access-gj795\") pod \"downloads-7954f5f757-jgw4c\" (UID: \"3b9983ba-ed8d-4654-ba74-f25433aa7ee7\") " pod="openshift-console/downloads-7954f5f757-jgw4c"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.719984 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bec261f-fd1e-44f7-a402-cae34f722b6c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wmwcz\" (UID: \"9bec261f-fd1e-44f7-a402-cae34f722b6c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.720026 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6c9561fa-20b6-4f87-aacc-cf0e0665ffa4-machine-approver-tls\") pod \"machine-approver-56656f9798-ssbbs\" (UID: \"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.720071 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2dad3db6-cddd-457d-8efa-908257ef7cc5-console-oauth-config\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.720116 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rprpc\" (UniqueName: \"kubernetes.io/projected/2dad3db6-cddd-457d-8efa-908257ef7cc5-kube-api-access-rprpc\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.720315 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.720350 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-image-import-ca\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.720383 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/288e6358-c74b-4597-8968-726a31365f82-serving-cert\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.720450 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16747491-2b7d-4cb1-841d-61c6f366cf8a-serving-cert\") pod \"openshift-config-operator-7777fb866f-gv9x5\" (UID: \"16747491-2b7d-4cb1-841d-61c6f366cf8a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.720497 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.722597 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c045a558-1990-432e-9965-e918f60aba14-proxy-tls\") pod \"machine-config-controller-84d6567774-dprzc\" (UID: \"c045a558-1990-432e-9965-e918f60aba14\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.722650 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65f71c2e-ab34-4d33-905f-609555dab78c-serving-cert\") pod \"route-controller-manager-6576b87f9c-2bn87\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.722670 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dm8p\" (UniqueName: \"kubernetes.io/projected/88c9b59f-1809-4252-a8fc-cad965848dc0-kube-api-access-8dm8p\") pod \"machine-config-operator-74547568cd-6lgxn\" (UID: \"88c9b59f-1809-4252-a8fc-cad965848dc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.723346 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca44490a-7fc8-478d-a6e2-670f49816b81-audit-policies\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.723383 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e9446a39-9776-4de8-9137-b8952d336419-stats-auth\") pod \"router-default-5444994796-5cczs\" (UID: \"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.723444 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.723469 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.723490 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmkdz\" (UniqueName: \"kubernetes.io/projected/288e6358-c74b-4597-8968-726a31365f82-kube-api-access-qmkdz\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr"
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.723508 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88c9b59f-1809-4252-a8fc-cad965848dc0-proxy-tls\") pod \"machine-config-operator-74547568cd-6lgxn\" (UID: \"88c9b59f-1809-4252-a8fc-cad965848dc0\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.723533 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bec261f-fd1e-44f7-a402-cae34f722b6c-config\") pod \"machine-api-operator-5694c8668f-wmwcz\" (UID: \"9bec261f-fd1e-44f7-a402-cae34f722b6c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.723557 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68dcdc32-485b-435c-81cc-43be463998bb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dnrhx\" (UID: \"68dcdc32-485b-435c-81cc-43be463998bb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.723963 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d49cfd6-9c1b-481f-a9d3-07a99661cf9d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7h7c7\" (UID: \"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.724135 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb7b4d4-0441-43c4-9596-4d38b369d661-config\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.724167 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nm6v\" (UniqueName: 
\"kubernetes.io/projected/f5464a23-ec80-4717-bfe0-6efeab811853-kube-api-access-2nm6v\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.724203 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.724228 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/88c9b59f-1809-4252-a8fc-cad965848dc0-images\") pod \"machine-config-operator-74547568cd-6lgxn\" (UID: \"88c9b59f-1809-4252-a8fc-cad965848dc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.724250 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvhz9\" (UniqueName: \"kubernetes.io/projected/e9446a39-9776-4de8-9137-b8952d336419-kube-api-access-lvhz9\") pod \"router-default-5444994796-5cczs\" (UID: \"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.724276 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/288e6358-c74b-4597-8968-726a31365f82-encryption-config\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " 
pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.724317 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bec261f-fd1e-44f7-a402-cae34f722b6c-config\") pod \"machine-api-operator-5694c8668f-wmwcz\" (UID: \"9bec261f-fd1e-44f7-a402-cae34f722b6c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.724869 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb7b4d4-0441-43c4-9596-4d38b369d661-config\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.725255 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c9561fa-20b6-4f87-aacc-cf0e0665ffa4-auth-proxy-config\") pod \"machine-approver-56656f9798-ssbbs\" (UID: \"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.725290 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87cb481-5261-4028-99e2-cd57ff6b61e1-config\") pod \"kube-apiserver-operator-766d6c64bb-gsfnm\" (UID: \"f87cb481-5261-4028-99e2-cd57ff6b61e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.725315 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-client-ca\") pod 
\"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.725353 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a67b9a8-ad8a-40e3-955c-53aed07a9140-config\") pod \"console-operator-58897d9998-mftbv\" (UID: \"8a67b9a8-ad8a-40e3-955c-53aed07a9140\") " pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.725375 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.726380 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.726430 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9446a39-9776-4de8-9137-b8952d336419-metrics-certs\") pod \"router-default-5444994796-5cczs\" (UID: \"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.726456 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fb7b4d4-0441-43c4-9596-4d38b369d661-serving-cert\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.726480 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-config\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.726503 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b97ls\" (UniqueName: \"kubernetes.io/projected/9bec261f-fd1e-44f7-a402-cae34f722b6c-kube-api-access-b97ls\") pod \"machine-api-operator-5694c8668f-wmwcz\" (UID: \"9bec261f-fd1e-44f7-a402-cae34f722b6c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.726540 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca44490a-7fc8-478d-a6e2-670f49816b81-audit-dir\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.726570 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-etcd-serving-ca\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc 
kubenswrapper[4874]: I0122 11:42:40.726596 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca44490a-7fc8-478d-a6e2-670f49816b81-encryption-config\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.726638 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88c9b59f-1809-4252-a8fc-cad965848dc0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6lgxn\" (UID: \"88c9b59f-1809-4252-a8fc-cad965848dc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.726666 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca44490a-7fc8-478d-a6e2-670f49816b81-serving-cert\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.726706 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68dcdc32-485b-435c-81cc-43be463998bb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dnrhx\" (UID: \"68dcdc32-485b-435c-81cc-43be463998bb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.726859 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9gf8\" (UniqueName: \"kubernetes.io/projected/9fd4241f-b523-4d66-bcdb-c3bb691765c9-kube-api-access-h9gf8\") pod 
\"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.726894 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsxpl\" (UniqueName: \"kubernetes.io/projected/6c9561fa-20b6-4f87-aacc-cf0e0665ffa4-kube-api-access-tsxpl\") pod \"machine-approver-56656f9798-ssbbs\" (UID: \"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.726923 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nfcq\" (UniqueName: \"kubernetes.io/projected/0d49cfd6-9c1b-481f-a9d3-07a99661cf9d-kube-api-access-9nfcq\") pod \"cluster-image-registry-operator-dc59b4c8b-7h7c7\" (UID: \"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.727013 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/288e6358-c74b-4597-8968-726a31365f82-etcd-client\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.727038 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e9446a39-9776-4de8-9137-b8952d336419-default-certificate\") pod \"router-default-5444994796-5cczs\" (UID: \"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.727662 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.727793 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-audit\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.727945 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c9561fa-20b6-4f87-aacc-cf0e0665ffa4-auth-proxy-config\") pod \"machine-approver-56656f9798-ssbbs\" (UID: \"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.728046 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.728026 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca44490a-7fc8-478d-a6e2-670f49816b81-audit-dir\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.727037 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca44490a-7fc8-478d-a6e2-670f49816b81-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.728321 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7qdr\" (UniqueName: \"kubernetes.io/projected/5fb7b4d4-0441-43c4-9596-4d38b369d661-kube-api-access-x7qdr\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.728430 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f87cb481-5261-4028-99e2-cd57ff6b61e1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gsfnm\" (UID: \"f87cb481-5261-4028-99e2-cd57ff6b61e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.728546 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/6c9561fa-20b6-4f87-aacc-cf0e0665ffa4-machine-approver-tls\") pod \"machine-approver-56656f9798-ssbbs\" (UID: \"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.728625 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fb7b4d4-0441-43c4-9596-4d38b369d661-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.728676 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2dad3db6-cddd-457d-8efa-908257ef7cc5-oauth-serving-cert\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.728701 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c045a558-1990-432e-9965-e918f60aba14-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dprzc\" (UID: \"c045a558-1990-432e-9965-e918f60aba14\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.728736 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw8xf\" (UniqueName: \"kubernetes.io/projected/8a67b9a8-ad8a-40e3-955c-53aed07a9140-kube-api-access-cw8xf\") pod \"console-operator-58897d9998-mftbv\" (UID: \"8a67b9a8-ad8a-40e3-955c-53aed07a9140\") " 
pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.728757 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.729023 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-client-ca\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.729056 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dad3db6-cddd-457d-8efa-908257ef7cc5-trusted-ca-bundle\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.729226 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f87cb481-5261-4028-99e2-cd57ff6b61e1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gsfnm\" (UID: \"f87cb481-5261-4028-99e2-cd57ff6b61e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.729484 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-config\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.729538 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d49cfd6-9c1b-481f-a9d3-07a99661cf9d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7h7c7\" (UID: \"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.729626 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.729701 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fb7b4d4-0441-43c4-9596-4d38b369d661-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.729740 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fb7b4d4-0441-43c4-9596-4d38b369d661-service-ca-bundle\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" 
Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.729795 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9561fa-20b6-4f87-aacc-cf0e0665ffa4-config\") pod \"machine-approver-56656f9798-ssbbs\" (UID: \"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.729858 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65f71c2e-ab34-4d33-905f-609555dab78c-config\") pod \"route-controller-manager-6576b87f9c-2bn87\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.729886 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a67b9a8-ad8a-40e3-955c-53aed07a9140-serving-cert\") pod \"console-operator-58897d9998-mftbv\" (UID: \"8a67b9a8-ad8a-40e3-955c-53aed07a9140\") " pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.729928 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a67b9a8-ad8a-40e3-955c-53aed07a9140-trusted-ca\") pod \"console-operator-58897d9998-mftbv\" (UID: \"8a67b9a8-ad8a-40e3-955c-53aed07a9140\") " pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.730182 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d49cfd6-9c1b-481f-a9d3-07a99661cf9d-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-7h7c7\" (UID: \"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.730258 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9bec261f-fd1e-44f7-a402-cae34f722b6c-images\") pod \"machine-api-operator-5694c8668f-wmwcz\" (UID: \"9bec261f-fd1e-44f7-a402-cae34f722b6c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.730294 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-config\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.730371 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fb7b4d4-0441-43c4-9596-4d38b369d661-service-ca-bundle\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.730384 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c9561fa-20b6-4f87-aacc-cf0e0665ffa4-config\") pod \"machine-approver-56656f9798-ssbbs\" (UID: \"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.731192 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/9bec261f-fd1e-44f7-a402-cae34f722b6c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wmwcz\" (UID: \"9bec261f-fd1e-44f7-a402-cae34f722b6c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.731219 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9bec261f-fd1e-44f7-a402-cae34f722b6c-images\") pod \"machine-api-operator-5694c8668f-wmwcz\" (UID: \"9bec261f-fd1e-44f7-a402-cae34f722b6c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.732495 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca44490a-7fc8-478d-a6e2-670f49816b81-serving-cert\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.732658 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.733108 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5464a23-ec80-4717-bfe0-6efeab811853-serving-cert\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.733122 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca44490a-7fc8-478d-a6e2-670f49816b81-etcd-client\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: \"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 
11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.734349 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.736839 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d49cfd6-9c1b-481f-a9d3-07a99661cf9d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7h7c7\" (UID: \"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.745220 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fb7b4d4-0441-43c4-9596-4d38b369d661-serving-cert\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.747430 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f64ps"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.747633 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.748091 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.748438 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-service-ca/service-ca-9c57cc56f-dwcjx"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.748534 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-f64ps" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.748536 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.749314 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.749589 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.749718 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.749893 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.750524 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6dvvn"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.750784 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.750855 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cfzn7"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.750942 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.751426 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.751813 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cfzn7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.752078 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.752316 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.752512 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.752596 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.752632 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-869fn"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.754232 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jgw4c"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.755499 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.756089 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.757821 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.757864 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.758876 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8j9tz"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.760513 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mftbv"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.765424 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wws2s"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.767744 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.767621 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.769821 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.771234 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wmwcz"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.772816 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87"] Jan 22 11:42:40 crc 
kubenswrapper[4874]: I0122 11:42:40.773942 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rwcrv"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.776353 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.777595 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jg4wj"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.778778 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.780827 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.782079 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.784427 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f64ps"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.785610 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.787077 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.787275 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4"] Jan 22 11:42:40 crc 
kubenswrapper[4874]: I0122 11:42:40.788349 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cqkbr"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.790332 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.791371 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rx6nv"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.793040 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.794034 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.795584 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sjhjk"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.796650 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.798195 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.799829 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.803049 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dwcjx"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.804429 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.805653 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.807750 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.812446 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6dvvn"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.812508 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l87tw"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.817083 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l87tw"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.817172 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.818946 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.820065 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-kxvzk"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.820732 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kxvzk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.821772 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-s5xxd"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.822448 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s5xxd" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.823044 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kxvzk"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.824142 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s5xxd"] Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833288 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dad3db6-cddd-457d-8efa-908257ef7cc5-trusted-ca-bundle\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833331 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f87cb481-5261-4028-99e2-cd57ff6b61e1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gsfnm\" (UID: \"f87cb481-5261-4028-99e2-cd57ff6b61e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833357 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833382 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9fd4241f-b523-4d66-bcdb-c3bb691765c9-audit-dir\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833425 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2xwq\" (UniqueName: \"kubernetes.io/projected/308c337e-1e28-4e34-9ccb-8ae546eee089-kube-api-access-j2xwq\") pod \"cluster-samples-operator-665b6dd947-qfcsk\" (UID: \"308c337e-1e28-4e34-9ccb-8ae546eee089\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833454 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4370babd-21ea-4c7e-81b9-cdd611127094-etcd-service-ca\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833477 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9bvp\" (UniqueName: \"kubernetes.io/projected/4370babd-21ea-4c7e-81b9-cdd611127094-kube-api-access-d9bvp\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833503 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kg7v\" (UniqueName: \"kubernetes.io/projected/16747491-2b7d-4cb1-841d-61c6f366cf8a-kube-api-access-2kg7v\") pod \"openshift-config-operator-7777fb866f-gv9x5\" (UID: \"16747491-2b7d-4cb1-841d-61c6f366cf8a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833529 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/308c337e-1e28-4e34-9ccb-8ae546eee089-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qfcsk\" (UID: \"308c337e-1e28-4e34-9ccb-8ae546eee089\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833554 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/288e6358-c74b-4597-8968-726a31365f82-audit-dir\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833578 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f745af09-93ed-4c94-8411-f14b0aaaf1cf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vsxx4\" (UID: \"f745af09-93ed-4c94-8411-f14b0aaaf1cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833606 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-audit-policies\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833633 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4370babd-21ea-4c7e-81b9-cdd611127094-config\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:40 crc kubenswrapper[4874]: 
I0122 11:42:40.833657 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmqf8\" (UniqueName: \"kubernetes.io/projected/8b5afeda-c180-4aa9-a831-a0840d495fa8-kube-api-access-hmqf8\") pod \"catalog-operator-68c6474976-dt8w6\" (UID: \"8b5afeda-c180-4aa9-a831-a0840d495fa8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833682 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65f71c2e-ab34-4d33-905f-609555dab78c-client-ca\") pod \"route-controller-manager-6576b87f9c-2bn87\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833707 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833728 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4370babd-21ea-4c7e-81b9-cdd611127094-etcd-client\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833577 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9fd4241f-b523-4d66-bcdb-c3bb691765c9-audit-dir\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: 
\"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833753 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a15050a-7cbd-40f6-a656-a68293c0878a-secret-volume\") pod \"collect-profiles-29484690-npncx\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.833778 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1795c220-db74-434e-9111-917ff6d95077-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f64ps\" (UID: \"1795c220-db74-434e-9111-917ff6d95077\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f64ps" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834006 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2dad3db6-cddd-457d-8efa-908257ef7cc5-console-serving-cert\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834073 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9jtx\" (UniqueName: \"kubernetes.io/projected/c045a558-1990-432e-9965-e918f60aba14-kube-api-access-b9jtx\") pod \"machine-config-controller-84d6567774-dprzc\" (UID: \"c045a558-1990-432e-9965-e918f60aba14\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834142 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sh722\" (UniqueName: \"kubernetes.io/projected/304c66b8-6187-47ac-9c57-235c634eaae4-kube-api-access-sh722\") pod \"migrator-59844c95c7-rwcrv\" (UID: \"304c66b8-6187-47ac-9c57-235c634eaae4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rwcrv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834182 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/288e6358-c74b-4597-8968-726a31365f82-node-pullsecrets\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834235 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rprpc\" (UniqueName: \"kubernetes.io/projected/2dad3db6-cddd-457d-8efa-908257ef7cc5-kube-api-access-rprpc\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834260 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj795\" (UniqueName: \"kubernetes.io/projected/3b9983ba-ed8d-4654-ba74-f25433aa7ee7-kube-api-access-gj795\") pod \"downloads-7954f5f757-jgw4c\" (UID: \"3b9983ba-ed8d-4654-ba74-f25433aa7ee7\") " pod="openshift-console/downloads-7954f5f757-jgw4c" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834285 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/574225b6-a476-4591-9be9-ddd94e5281ef-srv-cert\") pod \"olm-operator-6b444d44fb-b6c5m\" (UID: \"574225b6-a476-4591-9be9-ddd94e5281ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834342 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b5afeda-c180-4aa9-a831-a0840d495fa8-profile-collector-cert\") pod \"catalog-operator-68c6474976-dt8w6\" (UID: \"8b5afeda-c180-4aa9-a831-a0840d495fa8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834368 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834391 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834429 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/288e6358-c74b-4597-8968-726a31365f82-serving-cert\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834452 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf6nt\" (UniqueName: \"kubernetes.io/projected/213d34f5-75cd-459c-9e56-2938fe5e3950-kube-api-access-lf6nt\") pod 
\"marketplace-operator-79b997595-6dvvn\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834495 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/288e6358-c74b-4597-8968-726a31365f82-node-pullsecrets\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834823 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dad3db6-cddd-457d-8efa-908257ef7cc5-trusted-ca-bundle\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834910 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-audit-policies\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.834968 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c045a558-1990-432e-9965-e918f60aba14-proxy-tls\") pod \"machine-config-controller-84d6567774-dprzc\" (UID: \"c045a558-1990-432e-9965-e918f60aba14\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.835007 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/65f71c2e-ab34-4d33-905f-609555dab78c-serving-cert\") pod \"route-controller-manager-6576b87f9c-2bn87\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.835034 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dm8p\" (UniqueName: \"kubernetes.io/projected/88c9b59f-1809-4252-a8fc-cad965848dc0-kube-api-access-8dm8p\") pod \"machine-config-operator-74547568cd-6lgxn\" (UID: \"88c9b59f-1809-4252-a8fc-cad965848dc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.835073 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d80367da-a8d4-4b76-9b33-5526aad85229-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s4fb2\" (UID: \"d80367da-a8d4-4b76-9b33-5526aad85229\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.835092 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krt27\" (UniqueName: \"kubernetes.io/projected/1795c220-db74-434e-9111-917ff6d95077-kube-api-access-krt27\") pod \"multus-admission-controller-857f4d67dd-f64ps\" (UID: \"1795c220-db74-434e-9111-917ff6d95077\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f64ps" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.835123 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: 
\"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.835142 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.835139 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/288e6358-c74b-4597-8968-726a31365f82-audit-dir\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.835306 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmkdz\" (UniqueName: \"kubernetes.io/projected/288e6358-c74b-4597-8968-726a31365f82-kube-api-access-qmkdz\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.835363 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88c9b59f-1809-4252-a8fc-cad965848dc0-proxy-tls\") pod \"machine-config-operator-74547568cd-6lgxn\" (UID: \"88c9b59f-1809-4252-a8fc-cad965848dc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.835819 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.836097 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/88c9b59f-1809-4252-a8fc-cad965848dc0-images\") pod \"machine-config-operator-74547568cd-6lgxn\" (UID: \"88c9b59f-1809-4252-a8fc-cad965848dc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.836153 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvhz9\" (UniqueName: \"kubernetes.io/projected/e9446a39-9776-4de8-9137-b8952d336419-kube-api-access-lvhz9\") pod \"router-default-5444994796-5cczs\" (UID: \"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.836335 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.836440 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/288e6358-c74b-4597-8968-726a31365f82-encryption-config\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.836485 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.836526 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9446a39-9776-4de8-9137-b8952d336419-metrics-certs\") pod \"router-default-5444994796-5cczs\" (UID: \"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.836646 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjztm\" (UniqueName: \"kubernetes.io/projected/0a15050a-7cbd-40f6-a656-a68293c0878a-kube-api-access-fjztm\") pod \"collect-profiles-29484690-npncx\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.836712 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-etcd-serving-ca\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.836762 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88c9b59f-1809-4252-a8fc-cad965848dc0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6lgxn\" (UID: \"88c9b59f-1809-4252-a8fc-cad965848dc0\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.836807 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9gf8\" (UniqueName: \"kubernetes.io/projected/9fd4241f-b523-4d66-bcdb-c3bb691765c9-kube-api-access-h9gf8\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.836844 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/88c9b59f-1809-4252-a8fc-cad965848dc0-images\") pod \"machine-config-operator-74547568cd-6lgxn\" (UID: \"88c9b59f-1809-4252-a8fc-cad965848dc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.837001 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.837093 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a15050a-7cbd-40f6-a656-a68293c0878a-config-volume\") pod \"collect-profiles-29484690-npncx\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.837145 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d80367da-a8d4-4b76-9b33-5526aad85229-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s4fb2\" (UID: \"d80367da-a8d4-4b76-9b33-5526aad85229\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.837185 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-audit\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.837347 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-serving-cert\") pod \"service-ca-operator-777779d784-6zpzc\" (UID: \"51b2c452-70bd-4a3e-bd74-c7cd6513fd45\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.837368 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/574225b6-a476-4591-9be9-ddd94e5281ef-profile-collector-cert\") pod \"olm-operator-6b444d44fb-b6c5m\" (UID: \"574225b6-a476-4591-9be9-ddd94e5281ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.837412 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88c9b59f-1809-4252-a8fc-cad965848dc0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6lgxn\" (UID: \"88c9b59f-1809-4252-a8fc-cad965848dc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" Jan 22 11:42:40 crc 
kubenswrapper[4874]: I0122 11:42:40.837389 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b5afeda-c180-4aa9-a831-a0840d495fa8-srv-cert\") pod \"catalog-operator-68c6474976-dt8w6\" (UID: \"8b5afeda-c180-4aa9-a831-a0840d495fa8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.837770 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-etcd-serving-ca\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.838004 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-audit\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.838071 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2dad3db6-cddd-457d-8efa-908257ef7cc5-oauth-serving-cert\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.838447 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c045a558-1990-432e-9965-e918f60aba14-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dprzc\" (UID: \"c045a558-1990-432e-9965-e918f60aba14\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.838597 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6dvvn\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.838826 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-config\") pod \"service-ca-operator-777779d784-6zpzc\" (UID: \"51b2c452-70bd-4a3e-bd74-c7cd6513fd45\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.838931 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3d1224e-a43e-4dca-8ad0-239dc50b6d58-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmhx8\" (UID: \"a3d1224e-a43e-4dca-8ad0-239dc50b6d58\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.839039 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d7ccd8e-549f-4b23-bcec-4a1c14e10478-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vqrw2\" (UID: \"8d7ccd8e-549f-4b23-bcec-4a1c14e10478\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 
11:42:40.839144 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx7xc\" (UniqueName: \"kubernetes.io/projected/f745af09-93ed-4c94-8411-f14b0aaaf1cf-kube-api-access-lx7xc\") pod \"openshift-controller-manager-operator-756b6f6bc6-vsxx4\" (UID: \"f745af09-93ed-4c94-8411-f14b0aaaf1cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.839247 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2dad3db6-cddd-457d-8efa-908257ef7cc5-console-serving-cert\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.839175 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2dad3db6-cddd-457d-8efa-908257ef7cc5-oauth-serving-cert\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.838897 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/288e6358-c74b-4597-8968-726a31365f82-serving-cert\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.839490 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.839573 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65f71c2e-ab34-4d33-905f-609555dab78c-config\") pod \"route-controller-manager-6576b87f9c-2bn87\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.839657 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a67b9a8-ad8a-40e3-955c-53aed07a9140-serving-cert\") pod \"console-operator-58897d9998-mftbv\" (UID: \"8a67b9a8-ad8a-40e3-955c-53aed07a9140\") " pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.839715 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.839791 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a67b9a8-ad8a-40e3-955c-53aed07a9140-trusted-ca\") pod \"console-operator-58897d9998-mftbv\" (UID: \"8a67b9a8-ad8a-40e3-955c-53aed07a9140\") " pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.839869 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4370babd-21ea-4c7e-81b9-cdd611127094-etcd-ca\") 
pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.839944 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-config\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840017 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d80367da-a8d4-4b76-9b33-5526aad85229-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s4fb2\" (UID: \"d80367da-a8d4-4b76-9b33-5526aad85229\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840096 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7t6g\" (UniqueName: \"kubernetes.io/projected/a3d1224e-a43e-4dca-8ad0-239dc50b6d58-kube-api-access-p7t6g\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmhx8\" (UID: \"a3d1224e-a43e-4dca-8ad0-239dc50b6d58\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840191 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2dad3db6-cddd-457d-8efa-908257ef7cc5-service-ca\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.839981 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/308c337e-1e28-4e34-9ccb-8ae546eee089-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qfcsk\" (UID: \"308c337e-1e28-4e34-9ccb-8ae546eee089\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840265 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9hxd\" (UniqueName: \"kubernetes.io/projected/68dcdc32-485b-435c-81cc-43be463998bb-kube-api-access-c9hxd\") pod \"openshift-apiserver-operator-796bbdcf4f-dnrhx\" (UID: \"68dcdc32-485b-435c-81cc-43be463998bb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840332 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2dad3db6-cddd-457d-8efa-908257ef7cc5-console-config\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840356 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16747491-2b7d-4cb1-841d-61c6f366cf8a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gv9x5\" (UID: \"16747491-2b7d-4cb1-841d-61c6f366cf8a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840376 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: 
\"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840416 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtv7h\" (UniqueName: \"kubernetes.io/projected/574225b6-a476-4591-9be9-ddd94e5281ef-kube-api-access-jtv7h\") pod \"olm-operator-6b444d44fb-b6c5m\" (UID: \"574225b6-a476-4591-9be9-ddd94e5281ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840439 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9446a39-9776-4de8-9137-b8952d336419-service-ca-bundle\") pod \"router-default-5444994796-5cczs\" (UID: \"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840458 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840479 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htst2\" (UniqueName: \"kubernetes.io/projected/4f61e177-ea8e-4f61-8ba2-67906f08a00c-kube-api-access-htst2\") pod \"dns-operator-744455d44c-rx6nv\" (UID: \"4f61e177-ea8e-4f61-8ba2-67906f08a00c\") " pod="openshift-dns-operator/dns-operator-744455d44c-rx6nv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840509 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m29d2\" (UniqueName: \"kubernetes.io/projected/65f71c2e-ab34-4d33-905f-609555dab78c-kube-api-access-m29d2\") pod \"route-controller-manager-6576b87f9c-2bn87\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840526 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-tmpfs\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840538 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-config\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840550 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2dad3db6-cddd-457d-8efa-908257ef7cc5-console-oauth-config\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840590 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4f61e177-ea8e-4f61-8ba2-67906f08a00c-metrics-tls\") pod \"dns-operator-744455d44c-rx6nv\" (UID: \"4f61e177-ea8e-4f61-8ba2-67906f08a00c\") " pod="openshift-dns-operator/dns-operator-744455d44c-rx6nv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840606 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-apiservice-cert\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840625 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-image-import-ca\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840642 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16747491-2b7d-4cb1-841d-61c6f366cf8a-serving-cert\") pod \"openshift-config-operator-7777fb866f-gv9x5\" (UID: \"16747491-2b7d-4cb1-841d-61c6f366cf8a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840658 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e9446a39-9776-4de8-9137-b8952d336419-stats-auth\") pod \"router-default-5444994796-5cczs\" (UID: \"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840674 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f745af09-93ed-4c94-8411-f14b0aaaf1cf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vsxx4\" (UID: \"f745af09-93ed-4c94-8411-f14b0aaaf1cf\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840714 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/061a736b-3663-4a73-9836-19724fc4bb71-certs\") pod \"machine-config-server-cfzn7\" (UID: \"061a736b-3663-4a73-9836-19724fc4bb71\") " pod="openshift-machine-config-operator/machine-config-server-cfzn7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840742 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68dcdc32-485b-435c-81cc-43be463998bb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dnrhx\" (UID: \"68dcdc32-485b-435c-81cc-43be463998bb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840760 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840780 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4370babd-21ea-4c7e-81b9-cdd611127094-serving-cert\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840800 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f87cb481-5261-4028-99e2-cd57ff6b61e1-config\") pod \"kube-apiserver-operator-766d6c64bb-gsfnm\" (UID: \"f87cb481-5261-4028-99e2-cd57ff6b61e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840815 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/061a736b-3663-4a73-9836-19724fc4bb71-node-bootstrap-token\") pod \"machine-config-server-cfzn7\" (UID: \"061a736b-3663-4a73-9836-19724fc4bb71\") " pod="openshift-machine-config-operator/machine-config-server-cfzn7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840847 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a67b9a8-ad8a-40e3-955c-53aed07a9140-config\") pod \"console-operator-58897d9998-mftbv\" (UID: \"8a67b9a8-ad8a-40e3-955c-53aed07a9140\") " pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840877 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840901 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-webhook-cert\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 
11:42:40.840928 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdhbq\" (UniqueName: \"kubernetes.io/projected/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-kube-api-access-sdhbq\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840709 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a67b9a8-ad8a-40e3-955c-53aed07a9140-trusted-ca\") pod \"console-operator-58897d9998-mftbv\" (UID: \"8a67b9a8-ad8a-40e3-955c-53aed07a9140\") " pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840951 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6dvvn\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.840987 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68dcdc32-485b-435c-81cc-43be463998bb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dnrhx\" (UID: \"68dcdc32-485b-435c-81cc-43be463998bb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.841023 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/288e6358-c74b-4597-8968-726a31365f82-etcd-client\") pod \"apiserver-76f77b778f-cqkbr\" (UID: 
\"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.841043 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e9446a39-9776-4de8-9137-b8952d336419-default-certificate\") pod \"router-default-5444994796-5cczs\" (UID: \"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.841065 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d1224e-a43e-4dca-8ad0-239dc50b6d58-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmhx8\" (UID: \"a3d1224e-a43e-4dca-8ad0-239dc50b6d58\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.841094 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f87cb481-5261-4028-99e2-cd57ff6b61e1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gsfnm\" (UID: \"f87cb481-5261-4028-99e2-cd57ff6b61e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.841114 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw8xf\" (UniqueName: \"kubernetes.io/projected/8a67b9a8-ad8a-40e3-955c-53aed07a9140-kube-api-access-cw8xf\") pod \"console-operator-58897d9998-mftbv\" (UID: \"8a67b9a8-ad8a-40e3-955c-53aed07a9140\") " pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.841133 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.841154 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqmnt\" (UniqueName: \"kubernetes.io/projected/8d7ccd8e-549f-4b23-bcec-4a1c14e10478-kube-api-access-lqmnt\") pod \"package-server-manager-789f6589d5-vqrw2\" (UID: \"8d7ccd8e-549f-4b23-bcec-4a1c14e10478\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.841168 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16747491-2b7d-4cb1-841d-61c6f366cf8a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gv9x5\" (UID: \"16747491-2b7d-4cb1-841d-61c6f366cf8a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.841173 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b42gj\" (UniqueName: \"kubernetes.io/projected/061a736b-3663-4a73-9836-19724fc4bb71-kube-api-access-b42gj\") pod \"machine-config-server-cfzn7\" (UID: \"061a736b-3663-4a73-9836-19724fc4bb71\") " pod="openshift-machine-config-operator/machine-config-server-cfzn7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.841250 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2dad3db6-cddd-457d-8efa-908257ef7cc5-console-config\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " 
pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.841533 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68dcdc32-485b-435c-81cc-43be463998bb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dnrhx\" (UID: \"68dcdc32-485b-435c-81cc-43be463998bb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.841727 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2dad3db6-cddd-457d-8efa-908257ef7cc5-service-ca\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.841945 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a67b9a8-ad8a-40e3-955c-53aed07a9140-config\") pod \"console-operator-58897d9998-mftbv\" (UID: \"8a67b9a8-ad8a-40e3-955c-53aed07a9140\") " pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.842429 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhgtt\" (UniqueName: \"kubernetes.io/projected/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-kube-api-access-bhgtt\") pod \"service-ca-operator-777779d784-6zpzc\" (UID: \"51b2c452-70bd-4a3e-bd74-c7cd6513fd45\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.842507 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9446a39-9776-4de8-9137-b8952d336419-service-ca-bundle\") pod \"router-default-5444994796-5cczs\" (UID: 
\"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.842952 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/288e6358-c74b-4597-8968-726a31365f82-image-import-ca\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.842966 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88c9b59f-1809-4252-a8fc-cad965848dc0-proxy-tls\") pod \"machine-config-operator-74547568cd-6lgxn\" (UID: \"88c9b59f-1809-4252-a8fc-cad965848dc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.843135 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.843167 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9446a39-9776-4de8-9137-b8952d336419-metrics-certs\") pod \"router-default-5444994796-5cczs\" (UID: \"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.843362 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.843713 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.843888 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2dad3db6-cddd-457d-8efa-908257ef7cc5-console-oauth-config\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.845100 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e9446a39-9776-4de8-9137-b8952d336419-default-certificate\") pod \"router-default-5444994796-5cczs\" (UID: \"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.845256 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.845334 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c045a558-1990-432e-9965-e918f60aba14-proxy-tls\") pod \"machine-config-controller-84d6567774-dprzc\" (UID: \"c045a558-1990-432e-9965-e918f60aba14\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.845478 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c045a558-1990-432e-9965-e918f60aba14-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dprzc\" (UID: \"c045a558-1990-432e-9965-e918f60aba14\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.845561 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/288e6358-c74b-4597-8968-726a31365f82-encryption-config\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.845652 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16747491-2b7d-4cb1-841d-61c6f366cf8a-serving-cert\") pod \"openshift-config-operator-7777fb866f-gv9x5\" (UID: \"16747491-2b7d-4cb1-841d-61c6f366cf8a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.845666 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/288e6358-c74b-4597-8968-726a31365f82-etcd-client\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 
11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.846194 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a67b9a8-ad8a-40e3-955c-53aed07a9140-serving-cert\") pod \"console-operator-58897d9998-mftbv\" (UID: \"8a67b9a8-ad8a-40e3-955c-53aed07a9140\") " pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.846311 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.846740 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.846769 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68dcdc32-485b-435c-81cc-43be463998bb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dnrhx\" (UID: \"68dcdc32-485b-435c-81cc-43be463998bb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.847062 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e9446a39-9776-4de8-9137-b8952d336419-stats-auth\") pod \"router-default-5444994796-5cczs\" (UID: 
\"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.847201 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.847256 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.866970 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.888604 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.897985 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65f71c2e-ab34-4d33-905f-609555dab78c-serving-cert\") pod \"route-controller-manager-6576b87f9c-2bn87\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.907362 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.910689 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65f71c2e-ab34-4d33-905f-609555dab78c-config\") pod 
\"route-controller-manager-6576b87f9c-2bn87\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.927479 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.935964 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65f71c2e-ab34-4d33-905f-609555dab78c-client-ca\") pod \"route-controller-manager-6576b87f9c-2bn87\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.943647 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/061a736b-3663-4a73-9836-19724fc4bb71-node-bootstrap-token\") pod \"machine-config-server-cfzn7\" (UID: \"061a736b-3663-4a73-9836-19724fc4bb71\") " pod="openshift-machine-config-operator/machine-config-server-cfzn7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.943701 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-webhook-cert\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.943743 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdhbq\" (UniqueName: \"kubernetes.io/projected/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-kube-api-access-sdhbq\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.943777 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6dvvn\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.943834 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d1224e-a43e-4dca-8ad0-239dc50b6d58-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmhx8\" (UID: \"a3d1224e-a43e-4dca-8ad0-239dc50b6d58\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.943900 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqmnt\" (UniqueName: \"kubernetes.io/projected/8d7ccd8e-549f-4b23-bcec-4a1c14e10478-kube-api-access-lqmnt\") pod \"package-server-manager-789f6589d5-vqrw2\" (UID: \"8d7ccd8e-549f-4b23-bcec-4a1c14e10478\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.943931 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b42gj\" (UniqueName: \"kubernetes.io/projected/061a736b-3663-4a73-9836-19724fc4bb71-kube-api-access-b42gj\") pod \"machine-config-server-cfzn7\" (UID: \"061a736b-3663-4a73-9836-19724fc4bb71\") " pod="openshift-machine-config-operator/machine-config-server-cfzn7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.943966 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bhgtt\" (UniqueName: \"kubernetes.io/projected/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-kube-api-access-bhgtt\") pod \"service-ca-operator-777779d784-6zpzc\" (UID: \"51b2c452-70bd-4a3e-bd74-c7cd6513fd45\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944029 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4370babd-21ea-4c7e-81b9-cdd611127094-etcd-service-ca\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944078 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9bvp\" (UniqueName: \"kubernetes.io/projected/4370babd-21ea-4c7e-81b9-cdd611127094-kube-api-access-d9bvp\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944125 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f745af09-93ed-4c94-8411-f14b0aaaf1cf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vsxx4\" (UID: \"f745af09-93ed-4c94-8411-f14b0aaaf1cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944165 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4370babd-21ea-4c7e-81b9-cdd611127094-config\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:40 crc 
kubenswrapper[4874]: I0122 11:42:40.944232 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmqf8\" (UniqueName: \"kubernetes.io/projected/8b5afeda-c180-4aa9-a831-a0840d495fa8-kube-api-access-hmqf8\") pod \"catalog-operator-68c6474976-dt8w6\" (UID: \"8b5afeda-c180-4aa9-a831-a0840d495fa8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944273 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4370babd-21ea-4c7e-81b9-cdd611127094-etcd-client\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944308 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a15050a-7cbd-40f6-a656-a68293c0878a-secret-volume\") pod \"collect-profiles-29484690-npncx\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944347 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1795c220-db74-434e-9111-917ff6d95077-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f64ps\" (UID: \"1795c220-db74-434e-9111-917ff6d95077\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f64ps" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944468 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/574225b6-a476-4591-9be9-ddd94e5281ef-srv-cert\") pod \"olm-operator-6b444d44fb-b6c5m\" (UID: \"574225b6-a476-4591-9be9-ddd94e5281ef\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944508 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b5afeda-c180-4aa9-a831-a0840d495fa8-profile-collector-cert\") pod \"catalog-operator-68c6474976-dt8w6\" (UID: \"8b5afeda-c180-4aa9-a831-a0840d495fa8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944538 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf6nt\" (UniqueName: \"kubernetes.io/projected/213d34f5-75cd-459c-9e56-2938fe5e3950-kube-api-access-lf6nt\") pod \"marketplace-operator-79b997595-6dvvn\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944586 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d80367da-a8d4-4b76-9b33-5526aad85229-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s4fb2\" (UID: \"d80367da-a8d4-4b76-9b33-5526aad85229\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944624 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krt27\" (UniqueName: \"kubernetes.io/projected/1795c220-db74-434e-9111-917ff6d95077-kube-api-access-krt27\") pod \"multus-admission-controller-857f4d67dd-f64ps\" (UID: \"1795c220-db74-434e-9111-917ff6d95077\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f64ps" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944674 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjztm\" 
(UniqueName: \"kubernetes.io/projected/0a15050a-7cbd-40f6-a656-a68293c0878a-kube-api-access-fjztm\") pod \"collect-profiles-29484690-npncx\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944713 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a15050a-7cbd-40f6-a656-a68293c0878a-config-volume\") pod \"collect-profiles-29484690-npncx\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944753 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d80367da-a8d4-4b76-9b33-5526aad85229-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s4fb2\" (UID: \"d80367da-a8d4-4b76-9b33-5526aad85229\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944775 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-serving-cert\") pod \"service-ca-operator-777779d784-6zpzc\" (UID: \"51b2c452-70bd-4a3e-bd74-c7cd6513fd45\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944795 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/574225b6-a476-4591-9be9-ddd94e5281ef-profile-collector-cert\") pod \"olm-operator-6b444d44fb-b6c5m\" (UID: \"574225b6-a476-4591-9be9-ddd94e5281ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" Jan 22 
11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944814 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b5afeda-c180-4aa9-a831-a0840d495fa8-srv-cert\") pod \"catalog-operator-68c6474976-dt8w6\" (UID: \"8b5afeda-c180-4aa9-a831-a0840d495fa8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944844 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6dvvn\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944875 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-config\") pod \"service-ca-operator-777779d784-6zpzc\" (UID: \"51b2c452-70bd-4a3e-bd74-c7cd6513fd45\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944906 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3d1224e-a43e-4dca-8ad0-239dc50b6d58-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmhx8\" (UID: \"a3d1224e-a43e-4dca-8ad0-239dc50b6d58\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944931 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8d7ccd8e-549f-4b23-bcec-4a1c14e10478-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vqrw2\" (UID: \"8d7ccd8e-549f-4b23-bcec-4a1c14e10478\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944957 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx7xc\" (UniqueName: \"kubernetes.io/projected/f745af09-93ed-4c94-8411-f14b0aaaf1cf-kube-api-access-lx7xc\") pod \"openshift-controller-manager-operator-756b6f6bc6-vsxx4\" (UID: \"f745af09-93ed-4c94-8411-f14b0aaaf1cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.944980 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4370babd-21ea-4c7e-81b9-cdd611127094-etcd-ca\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.945003 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d80367da-a8d4-4b76-9b33-5526aad85229-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s4fb2\" (UID: \"d80367da-a8d4-4b76-9b33-5526aad85229\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.945029 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7t6g\" (UniqueName: \"kubernetes.io/projected/a3d1224e-a43e-4dca-8ad0-239dc50b6d58-kube-api-access-p7t6g\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmhx8\" (UID: \"a3d1224e-a43e-4dca-8ad0-239dc50b6d58\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.945084 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtv7h\" (UniqueName: \"kubernetes.io/projected/574225b6-a476-4591-9be9-ddd94e5281ef-kube-api-access-jtv7h\") pod \"olm-operator-6b444d44fb-b6c5m\" (UID: \"574225b6-a476-4591-9be9-ddd94e5281ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.945127 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htst2\" (UniqueName: \"kubernetes.io/projected/4f61e177-ea8e-4f61-8ba2-67906f08a00c-kube-api-access-htst2\") pod \"dns-operator-744455d44c-rx6nv\" (UID: \"4f61e177-ea8e-4f61-8ba2-67906f08a00c\") " pod="openshift-dns-operator/dns-operator-744455d44c-rx6nv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.945169 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-tmpfs\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.945199 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4f61e177-ea8e-4f61-8ba2-67906f08a00c-metrics-tls\") pod \"dns-operator-744455d44c-rx6nv\" (UID: \"4f61e177-ea8e-4f61-8ba2-67906f08a00c\") " pod="openshift-dns-operator/dns-operator-744455d44c-rx6nv" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.945219 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-apiservice-cert\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.945244 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f745af09-93ed-4c94-8411-f14b0aaaf1cf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vsxx4\" (UID: \"f745af09-93ed-4c94-8411-f14b0aaaf1cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.945267 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/061a736b-3663-4a73-9836-19724fc4bb71-certs\") pod \"machine-config-server-cfzn7\" (UID: \"061a736b-3663-4a73-9836-19724fc4bb71\") " pod="openshift-machine-config-operator/machine-config-server-cfzn7" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.945321 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4370babd-21ea-4c7e-81b9-cdd611127094-serving-cert\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.946586 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-tmpfs\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.947958 4874 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.967364 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 11:42:40 crc kubenswrapper[4874]: I0122 11:42:40.986696 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.007877 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.017373 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f87cb481-5261-4028-99e2-cd57ff6b61e1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gsfnm\" (UID: \"f87cb481-5261-4028-99e2-cd57ff6b61e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.027510 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.034284 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87cb481-5261-4028-99e2-cd57ff6b61e1-config\") pod \"kube-apiserver-operator-766d6c64bb-gsfnm\" (UID: \"f87cb481-5261-4028-99e2-cd57ff6b61e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.047326 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.058079 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3d1224e-a43e-4dca-8ad0-239dc50b6d58-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmhx8\" (UID: \"a3d1224e-a43e-4dca-8ad0-239dc50b6d58\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.068138 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.086900 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.107586 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.115948 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d1224e-a43e-4dca-8ad0-239dc50b6d58-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmhx8\" (UID: \"a3d1224e-a43e-4dca-8ad0-239dc50b6d58\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.126881 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.148145 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.168122 4874 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.187917 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.208035 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.226958 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.247595 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.268141 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.278169 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4f61e177-ea8e-4f61-8ba2-67906f08a00c-metrics-tls\") pod \"dns-operator-744455d44c-rx6nv\" (UID: \"4f61e177-ea8e-4f61-8ba2-67906f08a00c\") " pod="openshift-dns-operator/dns-operator-744455d44c-rx6nv" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.288047 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.307866 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.328205 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.347603 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.367665 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.387385 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.395443 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d80367da-a8d4-4b76-9b33-5526aad85229-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s4fb2\" (UID: \"d80367da-a8d4-4b76-9b33-5526aad85229\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.407472 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.419309 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d80367da-a8d4-4b76-9b33-5526aad85229-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s4fb2\" (UID: \"d80367da-a8d4-4b76-9b33-5526aad85229\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.427871 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.447709 4874 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.467653 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.487551 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.508501 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.527765 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.557788 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.568566 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.588150 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.607698 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.628123 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.640994 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f745af09-93ed-4c94-8411-f14b0aaaf1cf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vsxx4\" (UID: \"f745af09-93ed-4c94-8411-f14b0aaaf1cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.648224 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.655588 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f745af09-93ed-4c94-8411-f14b0aaaf1cf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vsxx4\" (UID: \"f745af09-93ed-4c94-8411-f14b0aaaf1cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.667798 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.711803 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nm6v\" (UniqueName: \"kubernetes.io/projected/f5464a23-ec80-4717-bfe0-6efeab811853-kube-api-access-2nm6v\") pod \"controller-manager-879f6c89f-swcnq\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.732660 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqws\" (UniqueName: \"kubernetes.io/projected/ca44490a-7fc8-478d-a6e2-670f49816b81-kube-api-access-vkqws\") pod \"apiserver-7bbb656c7d-fl8nd\" (UID: 
\"ca44490a-7fc8-478d-a6e2-670f49816b81\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.745692 4874 request.go:700] Waited for 1.017661557s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.749469 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nfcq\" (UniqueName: \"kubernetes.io/projected/0d49cfd6-9c1b-481f-a9d3-07a99661cf9d-kube-api-access-9nfcq\") pod \"cluster-image-registry-operator-dc59b4c8b-7h7c7\" (UID: \"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.762382 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.767257 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b97ls\" (UniqueName: \"kubernetes.io/projected/9bec261f-fd1e-44f7-a402-cae34f722b6c-kube-api-access-b97ls\") pod \"machine-api-operator-5694c8668f-wmwcz\" (UID: \"9bec261f-fd1e-44f7-a402-cae34f722b6c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.781596 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsxpl\" (UniqueName: \"kubernetes.io/projected/6c9561fa-20b6-4f87-aacc-cf0e0665ffa4-kube-api-access-tsxpl\") pod \"machine-approver-56656f9798-ssbbs\" (UID: \"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.804239 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d49cfd6-9c1b-481f-a9d3-07a99661cf9d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7h7c7\" (UID: \"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.808125 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.814010 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.827749 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.830887 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.849779 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.851851 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4370babd-21ea-4c7e-81b9-cdd611127094-etcd-ca\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.861123 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4370babd-21ea-4c7e-81b9-cdd611127094-serving-cert\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.862674 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.867797 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.879300 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4370babd-21ea-4c7e-81b9-cdd611127094-etcd-client\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.887226 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.890440 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.895126 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4370babd-21ea-4c7e-81b9-cdd611127094-config\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:41 crc kubenswrapper[4874]: W0122 11:42:41.903889 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c9561fa_20b6_4f87_aacc_cf0e0665ffa4.slice/crio-c9acde3f2bdd745a092777b2d7badcee478c91b536adde065c962ce05336fe43 WatchSource:0}: Error finding container c9acde3f2bdd745a092777b2d7badcee478c91b536adde065c962ce05336fe43: Status 404 returned error can't find the container with id c9acde3f2bdd745a092777b2d7badcee478c91b536adde065c962ce05336fe43 Jan 22 11:42:41 crc kubenswrapper[4874]: 
I0122 11:42:41.911050 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.915007 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4370babd-21ea-4c7e-81b9-cdd611127094-etcd-service-ca\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.930676 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.944418 4874 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.944515 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/061a736b-3663-4a73-9836-19724fc4bb71-node-bootstrap-token podName:061a736b-3663-4a73-9836-19724fc4bb71 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.444491104 +0000 UTC m=+136.289562174 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/061a736b-3663-4a73-9836-19724fc4bb71-node-bootstrap-token") pod "machine-config-server-cfzn7" (UID: "061a736b-3663-4a73-9836-19724fc4bb71") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.945064 4874 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.945155 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d7ccd8e-549f-4b23-bcec-4a1c14e10478-package-server-manager-serving-cert podName:8d7ccd8e-549f-4b23-bcec-4a1c14e10478 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.445127794 +0000 UTC m=+136.290198904 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8d7ccd8e-549f-4b23-bcec-4a1c14e10478-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-vqrw2" (UID: "8d7ccd8e-549f-4b23-bcec-4a1c14e10478") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.945183 4874 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.945216 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1795c220-db74-434e-9111-917ff6d95077-webhook-certs podName:1795c220-db74-434e-9111-917ff6d95077 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.445207567 +0000 UTC m=+136.290278637 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1795c220-db74-434e-9111-917ff6d95077-webhook-certs") pod "multus-admission-controller-857f4d67dd-f64ps" (UID: "1795c220-db74-434e-9111-917ff6d95077") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.946353 4874 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.946419 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-apiservice-cert podName:1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.446406814 +0000 UTC m=+136.291477884 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-apiservice-cert") pod "packageserver-d55dfcdfc-6pr5c" (UID: "1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.947636 4874 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.947724 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/061a736b-3663-4a73-9836-19724fc4bb71-certs podName:061a736b-3663-4a73-9836-19724fc4bb71 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.447701455 +0000 UTC m=+136.292772565 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/061a736b-3663-4a73-9836-19724fc4bb71-certs") pod "machine-config-server-cfzn7" (UID: "061a736b-3663-4a73-9836-19724fc4bb71") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.950638 4874 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.950681 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-serving-cert podName:51b2c452-70bd-4a3e-bd74-c7cd6513fd45 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.450671869 +0000 UTC m=+136.295742939 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-serving-cert") pod "service-ca-operator-777779d784-6zpzc" (UID: "51b2c452-70bd-4a3e-bd74-c7cd6513fd45") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951178 4874 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951199 4874 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951227 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/574225b6-a476-4591-9be9-ddd94e5281ef-profile-collector-cert podName:574225b6-a476-4591-9be9-ddd94e5281ef nodeName:}" failed. 
No retries permitted until 2026-01-22 11:42:42.451216496 +0000 UTC m=+136.296287616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/574225b6-a476-4591-9be9-ddd94e5281ef-profile-collector-cert") pod "olm-operator-6b444d44fb-b6c5m" (UID: "574225b6-a476-4591-9be9-ddd94e5281ef") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951252 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/574225b6-a476-4591-9be9-ddd94e5281ef-srv-cert podName:574225b6-a476-4591-9be9-ddd94e5281ef nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.451240787 +0000 UTC m=+136.296311857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/574225b6-a476-4591-9be9-ddd94e5281ef-srv-cert") pod "olm-operator-6b444d44fb-b6c5m" (UID: "574225b6-a476-4591-9be9-ddd94e5281ef") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951264 4874 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951270 4874 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951269 4874 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951287 4874 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 
22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951300 4874 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951310 4874 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951300 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-trusted-ca podName:213d34f5-75cd-459c-9e56-2938fe5e3950 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.451290959 +0000 UTC m=+136.296362099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-trusted-ca") pod "marketplace-operator-79b997595-6dvvn" (UID: "213d34f5-75cd-459c-9e56-2938fe5e3950") : failed to sync configmap cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951328 4874 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951328 4874 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951363 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b5afeda-c180-4aa9-a831-a0840d495fa8-srv-cert podName:8b5afeda-c180-4aa9-a831-a0840d495fa8 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.45133641 +0000 UTC m=+136.296407480 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8b5afeda-c180-4aa9-a831-a0840d495fa8-srv-cert") pod "catalog-operator-68c6474976-dt8w6" (UID: "8b5afeda-c180-4aa9-a831-a0840d495fa8") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951380 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-webhook-cert podName:1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.45137343 +0000 UTC m=+136.296444490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-webhook-cert") pod "packageserver-d55dfcdfc-6pr5c" (UID: "1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951421 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-config podName:51b2c452-70bd-4a3e-bd74-c7cd6513fd45 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.451387001 +0000 UTC m=+136.296458071 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-config") pod "service-ca-operator-777779d784-6zpzc" (UID: "51b2c452-70bd-4a3e-bd74-c7cd6513fd45") : failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951439 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a15050a-7cbd-40f6-a656-a68293c0878a-secret-volume podName:0a15050a-7cbd-40f6-a656-a68293c0878a nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.451431642 +0000 UTC m=+136.296502712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-volume" (UniqueName: "kubernetes.io/secret/0a15050a-7cbd-40f6-a656-a68293c0878a-secret-volume") pod "collect-profiles-29484690-npncx" (UID: "0a15050a-7cbd-40f6-a656-a68293c0878a") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951451 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-operator-metrics podName:213d34f5-75cd-459c-9e56-2938fe5e3950 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.451445152 +0000 UTC m=+136.296516222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-operator-metrics") pod "marketplace-operator-79b997595-6dvvn" (UID: "213d34f5-75cd-459c-9e56-2938fe5e3950") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951464 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b5afeda-c180-4aa9-a831-a0840d495fa8-profile-collector-cert podName:8b5afeda-c180-4aa9-a831-a0840d495fa8 nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.451458353 +0000 UTC m=+136.296529423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8b5afeda-c180-4aa9-a831-a0840d495fa8-profile-collector-cert") pod "catalog-operator-68c6474976-dt8w6" (UID: "8b5afeda-c180-4aa9-a831-a0840d495fa8") : failed to sync secret cache: timed out waiting for the condition
Jan 22 11:42:41 crc kubenswrapper[4874]: E0122 11:42:41.951476 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a15050a-7cbd-40f6-a656-a68293c0878a-config-volume podName:0a15050a-7cbd-40f6-a656-a68293c0878a nodeName:}" failed. No retries permitted until 2026-01-22 11:42:42.451470603 +0000 UTC m=+136.296541673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0a15050a-7cbd-40f6-a656-a68293c0878a-config-volume") pod "collect-profiles-29484690-npncx" (UID: "0a15050a-7cbd-40f6-a656-a68293c0878a") : failed to sync configmap cache: timed out waiting for the condition
Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.952999 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.982282 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-swcnq"]
Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.986147 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7qdr\" (UniqueName: \"kubernetes.io/projected/5fb7b4d4-0441-43c4-9596-4d38b369d661-kube-api-access-x7qdr\") pod \"authentication-operator-69f744f599-869fn\" (UID: \"5fb7b4d4-0441-43c4-9596-4d38b369d661\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn"
Jan 22 11:42:41 crc kubenswrapper[4874]: I0122 11:42:41.987986 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 22 11:42:41 crc kubenswrapper[4874]: W0122 11:42:41.994435 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5464a23_ec80_4717_bfe0_6efeab811853.slice/crio-d437fa062da853d4db086da0d6818e76b56a3a9b163a7e48bb221915ade96982 WatchSource:0}: Error finding container d437fa062da853d4db086da0d6818e76b56a3a9b163a7e48bb221915ade96982: Status 404 returned error can't find the container with id d437fa062da853d4db086da0d6818e76b56a3a9b163a7e48bb221915ade96982
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.007726 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.011205 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7"]
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.027222 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.048987 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.067599 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.087388 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.108946 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.128014 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.144796 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.146769 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.168583 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.187925 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.207499 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.228001 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.248319 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.254798 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd"]
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.262625 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wmwcz"]
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.266996 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.287425 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.308661 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-869fn"]
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.309267 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.327185 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.354642 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.368475 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.387900 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.408665 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.427427 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.448985 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.469763 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.470256 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a15050a-7cbd-40f6-a656-a68293c0878a-secret-volume\") pod \"collect-profiles-29484690-npncx\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.470353 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1795c220-db74-434e-9111-917ff6d95077-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f64ps\" (UID: \"1795c220-db74-434e-9111-917ff6d95077\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f64ps"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.470513 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/574225b6-a476-4591-9be9-ddd94e5281ef-srv-cert\") pod \"olm-operator-6b444d44fb-b6c5m\" (UID: \"574225b6-a476-4591-9be9-ddd94e5281ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.470596 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b5afeda-c180-4aa9-a831-a0840d495fa8-profile-collector-cert\") pod \"catalog-operator-68c6474976-dt8w6\" (UID: \"8b5afeda-c180-4aa9-a831-a0840d495fa8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.470879 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a15050a-7cbd-40f6-a656-a68293c0878a-config-volume\") pod \"collect-profiles-29484690-npncx\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.472516 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a15050a-7cbd-40f6-a656-a68293c0878a-config-volume\") pod \"collect-profiles-29484690-npncx\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.472581 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-serving-cert\") pod \"service-ca-operator-777779d784-6zpzc\" (UID: \"51b2c452-70bd-4a3e-bd74-c7cd6513fd45\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.472623 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/574225b6-a476-4591-9be9-ddd94e5281ef-profile-collector-cert\") pod \"olm-operator-6b444d44fb-b6c5m\" (UID: \"574225b6-a476-4591-9be9-ddd94e5281ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.472646 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b5afeda-c180-4aa9-a831-a0840d495fa8-srv-cert\") pod \"catalog-operator-68c6474976-dt8w6\" (UID: \"8b5afeda-c180-4aa9-a831-a0840d495fa8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.472685 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6dvvn\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.472712 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-config\") pod \"service-ca-operator-777779d784-6zpzc\" (UID: \"51b2c452-70bd-4a3e-bd74-c7cd6513fd45\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.472738 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d7ccd8e-549f-4b23-bcec-4a1c14e10478-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vqrw2\" (UID: \"8d7ccd8e-549f-4b23-bcec-4a1c14e10478\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.472928 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-apiservice-cert\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.472992 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/061a736b-3663-4a73-9836-19724fc4bb71-certs\") pod \"machine-config-server-cfzn7\" (UID: \"061a736b-3663-4a73-9836-19724fc4bb71\") " pod="openshift-machine-config-operator/machine-config-server-cfzn7"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.473492 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/061a736b-3663-4a73-9836-19724fc4bb71-node-bootstrap-token\") pod \"machine-config-server-cfzn7\" (UID: \"061a736b-3663-4a73-9836-19724fc4bb71\") " pod="openshift-machine-config-operator/machine-config-server-cfzn7"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.473528 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-webhook-cert\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.473565 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6dvvn\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.473808 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6dvvn\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.476008 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a15050a-7cbd-40f6-a656-a68293c0878a-secret-volume\") pod \"collect-profiles-29484690-npncx\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.476105 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/574225b6-a476-4591-9be9-ddd94e5281ef-srv-cert\") pod \"olm-operator-6b444d44fb-b6c5m\" (UID: \"574225b6-a476-4591-9be9-ddd94e5281ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.476224 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b5afeda-c180-4aa9-a831-a0840d495fa8-profile-collector-cert\") pod \"catalog-operator-68c6474976-dt8w6\" (UID: \"8b5afeda-c180-4aa9-a831-a0840d495fa8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.476246 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1795c220-db74-434e-9111-917ff6d95077-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f64ps\" (UID: \"1795c220-db74-434e-9111-917ff6d95077\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f64ps"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.476830 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-webhook-cert\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.477902 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/574225b6-a476-4591-9be9-ddd94e5281ef-profile-collector-cert\") pod \"olm-operator-6b444d44fb-b6c5m\" (UID: \"574225b6-a476-4591-9be9-ddd94e5281ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.478026 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6dvvn\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.478440 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d7ccd8e-549f-4b23-bcec-4a1c14e10478-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vqrw2\" (UID: \"8d7ccd8e-549f-4b23-bcec-4a1c14e10478\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.478632 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-apiservice-cert\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.478773 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b5afeda-c180-4aa9-a831-a0840d495fa8-srv-cert\") pod \"catalog-operator-68c6474976-dt8w6\" (UID: \"8b5afeda-c180-4aa9-a831-a0840d495fa8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.479501 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/061a736b-3663-4a73-9836-19724fc4bb71-certs\") pod \"machine-config-server-cfzn7\" (UID: \"061a736b-3663-4a73-9836-19724fc4bb71\") " pod="openshift-machine-config-operator/machine-config-server-cfzn7"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.488108 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.507478 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.513391 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-config\") pod \"service-ca-operator-777779d784-6zpzc\" (UID: \"51b2c452-70bd-4a3e-bd74-c7cd6513fd45\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.528622 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.537250 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-serving-cert\") pod \"service-ca-operator-777779d784-6zpzc\" (UID: \"51b2c452-70bd-4a3e-bd74-c7cd6513fd45\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.548595 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.564582 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/061a736b-3663-4a73-9836-19724fc4bb71-node-bootstrap-token\") pod \"machine-config-server-cfzn7\" (UID: \"061a736b-3663-4a73-9836-19724fc4bb71\") " pod="openshift-machine-config-operator/machine-config-server-cfzn7"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.568075 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 22 11:42:42 crc kubenswrapper[4874]: W0122 11:42:42.573479 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fb7b4d4_0441_43c4_9596_4d38b369d661.slice/crio-dfe1aa3aa59fbea3da4a9f6c5658128185f20e5c711d45984fad83212361817f WatchSource:0}: Error finding container dfe1aa3aa59fbea3da4a9f6c5658128185f20e5c711d45984fad83212361817f: Status 404 returned error can't find the container with id dfe1aa3aa59fbea3da4a9f6c5658128185f20e5c711d45984fad83212361817f
Jan 22 11:42:42 crc kubenswrapper[4874]: W0122 11:42:42.574500 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bec261f_fd1e_44f7_a402_cae34f722b6c.slice/crio-ade3f71032e555ef85241c901460891b890f50c924ff1e07590f13a11e915f4d WatchSource:0}: Error finding container ade3f71032e555ef85241c901460891b890f50c924ff1e07590f13a11e915f4d: Status 404 returned error can't find the container with id ade3f71032e555ef85241c901460891b890f50c924ff1e07590f13a11e915f4d
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.612290 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.631513 4874 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.647763 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.668803 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.687283 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.707824 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.727967 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.748048 4874 request.go:700] Waited for 1.925234271s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.748748 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" event={"ID":"5fb7b4d4-0441-43c4-9596-4d38b369d661","Type":"ContainerStarted","Data":"ba72096e6e9776585fd6ae533977b340482846d49d5ec7499f9db5b281e7bd91"}
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.748788 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" event={"ID":"5fb7b4d4-0441-43c4-9596-4d38b369d661","Type":"ContainerStarted","Data":"dfe1aa3aa59fbea3da4a9f6c5658128185f20e5c711d45984fad83212361817f"}
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.750902 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.751379 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" event={"ID":"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d","Type":"ContainerStarted","Data":"5f0f0f78b2e78150f4a7126b5457975a21b1e7da3a542e7d9eb284ddb9612d7f"}
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.751484 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" event={"ID":"0d49cfd6-9c1b-481f-a9d3-07a99661cf9d","Type":"ContainerStarted","Data":"50cb5bac57621b81371da75029472795c61518f97e75a563aeb7b8f76bccb9ca"}
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.752616 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" event={"ID":"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4","Type":"ContainerStarted","Data":"9c364901ef224c02f1d83fab863948c744f3e6bdc3501b8cd95f34c82feeff08"}
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.752651 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" event={"ID":"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4","Type":"ContainerStarted","Data":"c9acde3f2bdd745a092777b2d7badcee478c91b536adde065c962ce05336fe43"}
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.753507 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" event={"ID":"ca44490a-7fc8-478d-a6e2-670f49816b81","Type":"ContainerStarted","Data":"bddb32e3c72065aeabce6eb546394e141aac0a1229a0392b581610208d4bd605"}
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.754566 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" event={"ID":"9bec261f-fd1e-44f7-a402-cae34f722b6c","Type":"ContainerStarted","Data":"d7e2aada04e2e1a91a486222e031fd03b0a873ea166c39ee1740647dbdb35646"}
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.754597 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" event={"ID":"9bec261f-fd1e-44f7-a402-cae34f722b6c","Type":"ContainerStarted","Data":"ade3f71032e555ef85241c901460891b890f50c924ff1e07590f13a11e915f4d"}
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.755510 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" event={"ID":"f5464a23-ec80-4717-bfe0-6efeab811853","Type":"ContainerStarted","Data":"fff471b9ef5cb3368fda0285ee54baef17701d5ccdb0a7c909cc9a9b7b3f50f5"}
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.755536 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" event={"ID":"f5464a23-ec80-4717-bfe0-6efeab811853","Type":"ContainerStarted","Data":"d437fa062da853d4db086da0d6818e76b56a3a9b163a7e48bb221915ade96982"}
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.755731 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.757373 4874 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-swcnq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.757438 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" podUID="f5464a23-ec80-4717-bfe0-6efeab811853" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.767539 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.788315 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.823780 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2xwq\" (UniqueName: \"kubernetes.io/projected/308c337e-1e28-4e34-9ccb-8ae546eee089-kube-api-access-j2xwq\") pod \"cluster-samples-operator-665b6dd947-qfcsk\" (UID: \"308c337e-1e28-4e34-9ccb-8ae546eee089\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.841335 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kg7v\" (UniqueName: \"kubernetes.io/projected/16747491-2b7d-4cb1-841d-61c6f366cf8a-kube-api-access-2kg7v\") pod \"openshift-config-operator-7777fb866f-gv9x5\" (UID: \"16747491-2b7d-4cb1-841d-61c6f366cf8a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.850249 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.871476 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9jtx\" (UniqueName: \"kubernetes.io/projected/c045a558-1990-432e-9965-e918f60aba14-kube-api-access-b9jtx\") pod \"machine-config-controller-84d6567774-dprzc\" (UID: \"c045a558-1990-432e-9965-e918f60aba14\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.879870 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.883937 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh722\" (UniqueName: \"kubernetes.io/projected/304c66b8-6187-47ac-9c57-235c634eaae4-kube-api-access-sh722\") pod \"migrator-59844c95c7-rwcrv\" (UID: \"304c66b8-6187-47ac-9c57-235c634eaae4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rwcrv"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.901668 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rprpc\" (UniqueName: \"kubernetes.io/projected/2dad3db6-cddd-457d-8efa-908257ef7cc5-kube-api-access-rprpc\") pod \"console-f9d7485db-wws2s\" (UID: \"2dad3db6-cddd-457d-8efa-908257ef7cc5\") " pod="openshift-console/console-f9d7485db-wws2s"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.903281 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.925667 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rwcrv"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.926774 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj795\" (UniqueName: \"kubernetes.io/projected/3b9983ba-ed8d-4654-ba74-f25433aa7ee7-kube-api-access-gj795\") pod \"downloads-7954f5f757-jgw4c\" (UID: \"3b9983ba-ed8d-4654-ba74-f25433aa7ee7\") " pod="openshift-console/downloads-7954f5f757-jgw4c"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.944320 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dm8p\" (UniqueName: \"kubernetes.io/projected/88c9b59f-1809-4252-a8fc-cad965848dc0-kube-api-access-8dm8p\") pod \"machine-config-operator-74547568cd-6lgxn\" (UID: \"88c9b59f-1809-4252-a8fc-cad965848dc0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.968937 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmkdz\" (UniqueName: \"kubernetes.io/projected/288e6358-c74b-4597-8968-726a31365f82-kube-api-access-qmkdz\") pod \"apiserver-76f77b778f-cqkbr\" (UID: \"288e6358-c74b-4597-8968-726a31365f82\") " pod="openshift-apiserver/apiserver-76f77b778f-cqkbr"
Jan 22 11:42:42 crc kubenswrapper[4874]: I0122 11:42:42.989017 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvhz9\" (UniqueName: \"kubernetes.io/projected/e9446a39-9776-4de8-9137-b8952d336419-kube-api-access-lvhz9\") pod \"router-default-5444994796-5cczs\" (UID: \"e9446a39-9776-4de8-9137-b8952d336419\") " pod="openshift-ingress/router-default-5444994796-5cczs"
Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.010105 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9gf8\" (UniqueName: \"kubernetes.io/projected/9fd4241f-b523-4d66-bcdb-c3bb691765c9-kube-api-access-h9gf8\") pod \"oauth-openshift-558db77b4-8j9tz\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") " pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz"
Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.043790 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m29d2\" (UniqueName: \"kubernetes.io/projected/65f71c2e-ab34-4d33-905f-609555dab78c-kube-api-access-m29d2\") pod \"route-controller-manager-6576b87f9c-2bn87\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87"
Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.061715 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk"]
Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.067521 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f87cb481-5261-4028-99e2-cd57ff6b61e1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gsfnm\" (UID: \"f87cb481-5261-4028-99e2-cd57ff6b61e1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm"
Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.084796 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9hxd\" (UniqueName: \"kubernetes.io/projected/68dcdc32-485b-435c-81cc-43be463998bb-kube-api-access-c9hxd\") pod \"openshift-apiserver-operator-796bbdcf4f-dnrhx\" (UID: \"68dcdc32-485b-435c-81cc-43be463998bb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx"
Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.104661 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5"]
Jan 22 11:42:43 crc
kubenswrapper[4874]: I0122 11:42:43.104923 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jgw4c" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.110989 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw8xf\" (UniqueName: \"kubernetes.io/projected/8a67b9a8-ad8a-40e3-955c-53aed07a9140-kube-api-access-cw8xf\") pod \"console-operator-58897d9998-mftbv\" (UID: \"8a67b9a8-ad8a-40e3-955c-53aed07a9140\") " pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.122225 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.124061 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdhbq\" (UniqueName: \"kubernetes.io/projected/1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a-kube-api-access-sdhbq\") pod \"packageserver-d55dfcdfc-6pr5c\" (UID: \"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.124323 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.137200 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc"] Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.144498 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqmnt\" (UniqueName: \"kubernetes.io/projected/8d7ccd8e-549f-4b23-bcec-4a1c14e10478-kube-api-access-lqmnt\") pod \"package-server-manager-789f6589d5-vqrw2\" (UID: \"8d7ccd8e-549f-4b23-bcec-4a1c14e10478\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.159606 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.161498 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b42gj\" (UniqueName: \"kubernetes.io/projected/061a736b-3663-4a73-9836-19724fc4bb71-kube-api-access-b42gj\") pod \"machine-config-server-cfzn7\" (UID: \"061a736b-3663-4a73-9836-19724fc4bb71\") " pod="openshift-machine-config-operator/machine-config-server-cfzn7" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.166941 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rwcrv"] Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.172439 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.181445 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhgtt\" (UniqueName: \"kubernetes.io/projected/51b2c452-70bd-4a3e-bd74-c7cd6513fd45-kube-api-access-bhgtt\") pod \"service-ca-operator-777779d784-6zpzc\" (UID: \"51b2c452-70bd-4a3e-bd74-c7cd6513fd45\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.188269 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.203948 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.213959 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9bvp\" (UniqueName: \"kubernetes.io/projected/4370babd-21ea-4c7e-81b9-cdd611127094-kube-api-access-d9bvp\") pod \"etcd-operator-b45778765-sjhjk\" (UID: \"4370babd-21ea-4c7e-81b9-cdd611127094\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.215548 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.225823 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmqf8\" (UniqueName: \"kubernetes.io/projected/8b5afeda-c180-4aa9-a831-a0840d495fa8-kube-api-access-hmqf8\") pod \"catalog-operator-68c6474976-dt8w6\" (UID: \"8b5afeda-c180-4aa9-a831-a0840d495fa8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.232559 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.243733 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf6nt\" (UniqueName: \"kubernetes.io/projected/213d34f5-75cd-459c-9e56-2938fe5e3950-kube-api-access-lf6nt\") pod \"marketplace-operator-79b997595-6dvvn\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.273812 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krt27\" (UniqueName: \"kubernetes.io/projected/1795c220-db74-434e-9111-917ff6d95077-kube-api-access-krt27\") pod \"multus-admission-controller-857f4d67dd-f64ps\" (UID: \"1795c220-db74-434e-9111-917ff6d95077\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f64ps" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.290545 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.292593 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjztm\" (UniqueName: \"kubernetes.io/projected/0a15050a-7cbd-40f6-a656-a68293c0878a-kube-api-access-fjztm\") pod \"collect-profiles-29484690-npncx\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.299300 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.305507 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx7xc\" (UniqueName: \"kubernetes.io/projected/f745af09-93ed-4c94-8411-f14b0aaaf1cf-kube-api-access-lx7xc\") pod \"openshift-controller-manager-operator-756b6f6bc6-vsxx4\" (UID: \"f745af09-93ed-4c94-8411-f14b0aaaf1cf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.329290 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d80367da-a8d4-4b76-9b33-5526aad85229-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s4fb2\" (UID: \"d80367da-a8d4-4b76-9b33-5526aad85229\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.343868 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.355164 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7t6g\" (UniqueName: \"kubernetes.io/projected/a3d1224e-a43e-4dca-8ad0-239dc50b6d58-kube-api-access-p7t6g\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmhx8\" (UID: \"a3d1224e-a43e-4dca-8ad0-239dc50b6d58\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.371844 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.376368 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.382963 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-f64ps" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.391268 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.416693 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.421744 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.434407 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cfzn7" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.440578 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.450134 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htst2\" (UniqueName: \"kubernetes.io/projected/4f61e177-ea8e-4f61-8ba2-67906f08a00c-kube-api-access-htst2\") pod \"dns-operator-744455d44c-rx6nv\" (UID: \"4f61e177-ea8e-4f61-8ba2-67906f08a00c\") " pod="openshift-dns-operator/dns-operator-744455d44c-rx6nv" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.451847 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtv7h\" (UniqueName: \"kubernetes.io/projected/574225b6-a476-4591-9be9-ddd94e5281ef-kube-api-access-jtv7h\") pod \"olm-operator-6b444d44fb-b6c5m\" (UID: \"574225b6-a476-4591-9be9-ddd94e5281ef\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.463802 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.489738 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gp99\" (UniqueName: \"kubernetes.io/projected/67454032-f4d2-418e-afad-a48c0a80007d-kube-api-access-6gp99\") pod \"ingress-operator-5b745b69d9-89ldw\" (UID: \"67454032-f4d2-418e-afad-a48c0a80007d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.489776 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67454032-f4d2-418e-afad-a48c0a80007d-trusted-ca\") pod \"ingress-operator-5b745b69d9-89ldw\" (UID: \"67454032-f4d2-418e-afad-a48c0a80007d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.489792 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90a037cd-cacf-4706-a857-f65c8f16c384-trusted-ca\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.489809 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90a037cd-cacf-4706-a857-f65c8f16c384-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.489829 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a785e748-a557-4c24-8a4d-dc03cfc3c357-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6jqjk\" (UID: \"a785e748-a557-4c24-8a4d-dc03cfc3c357\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.489847 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90a037cd-cacf-4706-a857-f65c8f16c384-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.489863 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee5bb84-2932-4f1c-8af7-39e10c1260c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vpjnl\" (UID: \"4ee5bb84-2932-4f1c-8af7-39e10c1260c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.489877 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6a39a4e6-a146-4ff3-8e90-eb516fe27e38-signing-cabundle\") pod \"service-ca-9c57cc56f-dwcjx\" (UID: \"6a39a4e6-a146-4ff3-8e90-eb516fe27e38\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.489900 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67454032-f4d2-418e-afad-a48c0a80007d-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-89ldw\" (UID: \"67454032-f4d2-418e-afad-a48c0a80007d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.489915 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6a39a4e6-a146-4ff3-8e90-eb516fe27e38-signing-key\") pod \"service-ca-9c57cc56f-dwcjx\" (UID: \"6a39a4e6-a146-4ff3-8e90-eb516fe27e38\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.489945 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.489971 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ee5bb84-2932-4f1c-8af7-39e10c1260c2-config\") pod \"kube-controller-manager-operator-78b949d7b-vpjnl\" (UID: \"4ee5bb84-2932-4f1c-8af7-39e10c1260c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.489989 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp7dh\" (UniqueName: \"kubernetes.io/projected/a785e748-a557-4c24-8a4d-dc03cfc3c357-kube-api-access-lp7dh\") pod \"control-plane-machine-set-operator-78cbb6b69f-6jqjk\" (UID: \"a785e748-a557-4c24-8a4d-dc03cfc3c357\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk" Jan 22 11:42:43 crc kubenswrapper[4874]: 
I0122 11:42:43.490006 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-registry-tls\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.490020 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-bound-sa-token\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.490035 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwtlg\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-kube-api-access-zwtlg\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.490085 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ee5bb84-2932-4f1c-8af7-39e10c1260c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vpjnl\" (UID: \"4ee5bb84-2932-4f1c-8af7-39e10c1260c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.490113 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/90a037cd-cacf-4706-a857-f65c8f16c384-registry-certificates\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.490155 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rb79\" (UniqueName: \"kubernetes.io/projected/6a39a4e6-a146-4ff3-8e90-eb516fe27e38-kube-api-access-7rb79\") pod \"service-ca-9c57cc56f-dwcjx\" (UID: \"6a39a4e6-a146-4ff3-8e90-eb516fe27e38\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.490188 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67454032-f4d2-418e-afad-a48c0a80007d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-89ldw\" (UID: \"67454032-f4d2-418e-afad-a48c0a80007d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" Jan 22 11:42:43 crc kubenswrapper[4874]: E0122 11:42:43.492373 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:43.992361738 +0000 UTC m=+137.837432808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.596726 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597167 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67454032-f4d2-418e-afad-a48c0a80007d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-89ldw\" (UID: \"67454032-f4d2-418e-afad-a48c0a80007d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597261 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjtx5\" (UniqueName: \"kubernetes.io/projected/41a97907-0515-44cd-b6a4-87463a9b3819-kube-api-access-vjtx5\") pod \"dns-default-kxvzk\" (UID: \"41a97907-0515-44cd-b6a4-87463a9b3819\") " pod="openshift-dns/dns-default-kxvzk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597365 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-registration-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: 
\"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597482 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-csi-data-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597554 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl6zw\" (UniqueName: \"kubernetes.io/projected/a54b65f9-405b-4a90-a9eb-e0438722ffa8-kube-api-access-sl6zw\") pod \"ingress-canary-s5xxd\" (UID: \"a54b65f9-405b-4a90-a9eb-e0438722ffa8\") " pod="openshift-ingress-canary/ingress-canary-s5xxd" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597582 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gp99\" (UniqueName: \"kubernetes.io/projected/67454032-f4d2-418e-afad-a48c0a80007d-kube-api-access-6gp99\") pod \"ingress-operator-5b745b69d9-89ldw\" (UID: \"67454032-f4d2-418e-afad-a48c0a80007d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597607 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67454032-f4d2-418e-afad-a48c0a80007d-trusted-ca\") pod \"ingress-operator-5b745b69d9-89ldw\" (UID: \"67454032-f4d2-418e-afad-a48c0a80007d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597651 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/90a037cd-cacf-4706-a857-f65c8f16c384-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597666 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90a037cd-cacf-4706-a857-f65c8f16c384-trusted-ca\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597680 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a54b65f9-405b-4a90-a9eb-e0438722ffa8-cert\") pod \"ingress-canary-s5xxd\" (UID: \"a54b65f9-405b-4a90-a9eb-e0438722ffa8\") " pod="openshift-ingress-canary/ingress-canary-s5xxd" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597707 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-socket-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597721 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trsjs\" (UniqueName: \"kubernetes.io/projected/7a9c3919-d376-4655-818e-3a88a8c0b883-kube-api-access-trsjs\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597757 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a785e748-a557-4c24-8a4d-dc03cfc3c357-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6jqjk\" (UID: \"a785e748-a557-4c24-8a4d-dc03cfc3c357\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597776 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90a037cd-cacf-4706-a857-f65c8f16c384-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597800 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6a39a4e6-a146-4ff3-8e90-eb516fe27e38-signing-cabundle\") pod \"service-ca-9c57cc56f-dwcjx\" (UID: \"6a39a4e6-a146-4ff3-8e90-eb516fe27e38\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597815 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee5bb84-2932-4f1c-8af7-39e10c1260c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vpjnl\" (UID: \"4ee5bb84-2932-4f1c-8af7-39e10c1260c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597864 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67454032-f4d2-418e-afad-a48c0a80007d-metrics-tls\") pod \"ingress-operator-5b745b69d9-89ldw\" (UID: \"67454032-f4d2-418e-afad-a48c0a80007d\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597888 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6a39a4e6-a146-4ff3-8e90-eb516fe27e38-signing-key\") pod \"service-ca-9c57cc56f-dwcjx\" (UID: \"6a39a4e6-a146-4ff3-8e90-eb516fe27e38\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597934 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41a97907-0515-44cd-b6a4-87463a9b3819-metrics-tls\") pod \"dns-default-kxvzk\" (UID: \"41a97907-0515-44cd-b6a4-87463a9b3819\") " pod="openshift-dns/dns-default-kxvzk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.597963 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ee5bb84-2932-4f1c-8af7-39e10c1260c2-config\") pod \"kube-controller-manager-operator-78b949d7b-vpjnl\" (UID: \"4ee5bb84-2932-4f1c-8af7-39e10c1260c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.598010 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp7dh\" (UniqueName: \"kubernetes.io/projected/a785e748-a557-4c24-8a4d-dc03cfc3c357-kube-api-access-lp7dh\") pod \"control-plane-machine-set-operator-78cbb6b69f-6jqjk\" (UID: \"a785e748-a557-4c24-8a4d-dc03cfc3c357\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.598037 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-registry-tls\") pod 
\"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.598097 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwtlg\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-kube-api-access-zwtlg\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.598119 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-plugins-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.598168 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-bound-sa-token\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.598227 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ee5bb84-2932-4f1c-8af7-39e10c1260c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vpjnl\" (UID: \"4ee5bb84-2932-4f1c-8af7-39e10c1260c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.598284 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90a037cd-cacf-4706-a857-f65c8f16c384-registry-certificates\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.598317 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41a97907-0515-44cd-b6a4-87463a9b3819-config-volume\") pod \"dns-default-kxvzk\" (UID: \"41a97907-0515-44cd-b6a4-87463a9b3819\") " pod="openshift-dns/dns-default-kxvzk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.598364 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rb79\" (UniqueName: \"kubernetes.io/projected/6a39a4e6-a146-4ff3-8e90-eb516fe27e38-kube-api-access-7rb79\") pod \"service-ca-9c57cc56f-dwcjx\" (UID: \"6a39a4e6-a146-4ff3-8e90-eb516fe27e38\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.598418 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-mountpoint-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: E0122 11:42:43.598653 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:44.098616278 +0000 UTC m=+137.943687358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.599225 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90a037cd-cacf-4706-a857-f65c8f16c384-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.599938 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90a037cd-cacf-4706-a857-f65c8f16c384-trusted-ca\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.604370 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90a037cd-cacf-4706-a857-f65c8f16c384-registry-certificates\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.607314 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/67454032-f4d2-418e-afad-a48c0a80007d-trusted-ca\") pod \"ingress-operator-5b745b69d9-89ldw\" (UID: \"67454032-f4d2-418e-afad-a48c0a80007d\") 
" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.608161 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ee5bb84-2932-4f1c-8af7-39e10c1260c2-config\") pod \"kube-controller-manager-operator-78b949d7b-vpjnl\" (UID: \"4ee5bb84-2932-4f1c-8af7-39e10c1260c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.610704 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a785e748-a557-4c24-8a4d-dc03cfc3c357-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6jqjk\" (UID: \"a785e748-a557-4c24-8a4d-dc03cfc3c357\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.613694 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6a39a4e6-a146-4ff3-8e90-eb516fe27e38-signing-cabundle\") pod \"service-ca-9c57cc56f-dwcjx\" (UID: \"6a39a4e6-a146-4ff3-8e90-eb516fe27e38\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.613931 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.622967 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6a39a4e6-a146-4ff3-8e90-eb516fe27e38-signing-key\") pod \"service-ca-9c57cc56f-dwcjx\" (UID: \"6a39a4e6-a146-4ff3-8e90-eb516fe27e38\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.629661 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rx6nv" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.640370 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/67454032-f4d2-418e-afad-a48c0a80007d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-89ldw\" (UID: \"67454032-f4d2-418e-afad-a48c0a80007d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.640898 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rb79\" (UniqueName: \"kubernetes.io/projected/6a39a4e6-a146-4ff3-8e90-eb516fe27e38-kube-api-access-7rb79\") pod \"service-ca-9c57cc56f-dwcjx\" (UID: \"6a39a4e6-a146-4ff3-8e90-eb516fe27e38\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.644691 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90a037cd-cacf-4706-a857-f65c8f16c384-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 
11:42:43.648123 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67454032-f4d2-418e-afad-a48c0a80007d-metrics-tls\") pod \"ingress-operator-5b745b69d9-89ldw\" (UID: \"67454032-f4d2-418e-afad-a48c0a80007d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.659263 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-registry-tls\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.665119 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwtlg\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-kube-api-access-zwtlg\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.667759 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp7dh\" (UniqueName: \"kubernetes.io/projected/a785e748-a557-4c24-8a4d-dc03cfc3c357-kube-api-access-lp7dh\") pod \"control-plane-machine-set-operator-78cbb6b69f-6jqjk\" (UID: \"a785e748-a557-4c24-8a4d-dc03cfc3c357\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.671971 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ee5bb84-2932-4f1c-8af7-39e10c1260c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vpjnl\" (UID: \"4ee5bb84-2932-4f1c-8af7-39e10c1260c2\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.673749 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jgw4c"] Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.686786 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gp99\" (UniqueName: \"kubernetes.io/projected/67454032-f4d2-418e-afad-a48c0a80007d-kube-api-access-6gp99\") pod \"ingress-operator-5b745b69d9-89ldw\" (UID: \"67454032-f4d2-418e-afad-a48c0a80007d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.696920 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee5bb84-2932-4f1c-8af7-39e10c1260c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vpjnl\" (UID: \"4ee5bb84-2932-4f1c-8af7-39e10c1260c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.702003 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-bound-sa-token\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.702312 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.703320 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.703365 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41a97907-0515-44cd-b6a4-87463a9b3819-metrics-tls\") pod \"dns-default-kxvzk\" (UID: \"41a97907-0515-44cd-b6a4-87463a9b3819\") " pod="openshift-dns/dns-default-kxvzk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.703390 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-plugins-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.703437 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41a97907-0515-44cd-b6a4-87463a9b3819-config-volume\") pod \"dns-default-kxvzk\" (UID: \"41a97907-0515-44cd-b6a4-87463a9b3819\") " pod="openshift-dns/dns-default-kxvzk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.703464 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-mountpoint-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " 
pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.703494 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjtx5\" (UniqueName: \"kubernetes.io/projected/41a97907-0515-44cd-b6a4-87463a9b3819-kube-api-access-vjtx5\") pod \"dns-default-kxvzk\" (UID: \"41a97907-0515-44cd-b6a4-87463a9b3819\") " pod="openshift-dns/dns-default-kxvzk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.703515 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-registration-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.703544 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-csi-data-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.703571 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl6zw\" (UniqueName: \"kubernetes.io/projected/a54b65f9-405b-4a90-a9eb-e0438722ffa8-kube-api-access-sl6zw\") pod \"ingress-canary-s5xxd\" (UID: \"a54b65f9-405b-4a90-a9eb-e0438722ffa8\") " pod="openshift-ingress-canary/ingress-canary-s5xxd" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.703592 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a54b65f9-405b-4a90-a9eb-e0438722ffa8-cert\") pod \"ingress-canary-s5xxd\" (UID: \"a54b65f9-405b-4a90-a9eb-e0438722ffa8\") " pod="openshift-ingress-canary/ingress-canary-s5xxd" Jan 22 
11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.703614 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-socket-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.703633 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trsjs\" (UniqueName: \"kubernetes.io/projected/7a9c3919-d376-4655-818e-3a88a8c0b883-kube-api-access-trsjs\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: E0122 11:42:43.704045 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:44.204030691 +0000 UTC m=+138.049101761 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.704839 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-plugins-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.704930 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-csi-data-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.705221 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-mountpoint-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.705365 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-registration-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 
11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.705440 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7a9c3919-d376-4655-818e-3a88a8c0b883-socket-dir\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.706368 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41a97907-0515-44cd-b6a4-87463a9b3819-config-volume\") pod \"dns-default-kxvzk\" (UID: \"41a97907-0515-44cd-b6a4-87463a9b3819\") " pod="openshift-dns/dns-default-kxvzk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.709099 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mftbv"] Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.713116 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a54b65f9-405b-4a90-a9eb-e0438722ffa8-cert\") pod \"ingress-canary-s5xxd\" (UID: \"a54b65f9-405b-4a90-a9eb-e0438722ffa8\") " pod="openshift-ingress-canary/ingress-canary-s5xxd" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.730757 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41a97907-0515-44cd-b6a4-87463a9b3819-metrics-tls\") pod \"dns-default-kxvzk\" (UID: \"41a97907-0515-44cd-b6a4-87463a9b3819\") " pod="openshift-dns/dns-default-kxvzk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.749645 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trsjs\" (UniqueName: \"kubernetes.io/projected/7a9c3919-d376-4655-818e-3a88a8c0b883-kube-api-access-trsjs\") pod \"csi-hostpathplugin-l87tw\" (UID: \"7a9c3919-d376-4655-818e-3a88a8c0b883\") " 
pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.750077 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.761064 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjtx5\" (UniqueName: \"kubernetes.io/projected/41a97907-0515-44cd-b6a4-87463a9b3819-kube-api-access-vjtx5\") pod \"dns-default-kxvzk\" (UID: \"41a97907-0515-44cd-b6a4-87463a9b3819\") " pod="openshift-dns/dns-default-kxvzk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.792774 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl6zw\" (UniqueName: \"kubernetes.io/projected/a54b65f9-405b-4a90-a9eb-e0438722ffa8-kube-api-access-sl6zw\") pod \"ingress-canary-s5xxd\" (UID: \"a54b65f9-405b-4a90-a9eb-e0438722ffa8\") " pod="openshift-ingress-canary/ingress-canary-s5xxd" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.795227 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8j9tz"] Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.797926 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l87tw" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.804561 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:43 crc kubenswrapper[4874]: E0122 11:42:43.804976 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:44.304960374 +0000 UTC m=+138.150031444 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.808607 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-kxvzk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.816358 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc" event={"ID":"c045a558-1990-432e-9965-e918f60aba14","Type":"ContainerStarted","Data":"8467f2a46fbebb9e4dc1def39d91236ab29caa6aee23ce46778cfaaf0d83ef57"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.816417 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc" event={"ID":"c045a558-1990-432e-9965-e918f60aba14","Type":"ContainerStarted","Data":"5017eeb33b0ae024aedc6a079168c60ce6127106057a25534829aeb4246ab6ea"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.830700 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk" event={"ID":"308c337e-1e28-4e34-9ccb-8ae546eee089","Type":"ContainerStarted","Data":"752c9a819a1d05cac2b0fb64e0acd5937788e935466322f6a1fed0a7fe7f456a"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.830762 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk" event={"ID":"308c337e-1e28-4e34-9ccb-8ae546eee089","Type":"ContainerStarted","Data":"c0027027d3937714e90c671faaa05115d50098db457aa8f62164545362bdefe2"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.831760 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cfzn7" event={"ID":"061a736b-3663-4a73-9836-19724fc4bb71","Type":"ContainerStarted","Data":"a1e0b7129a01f83cf10c32190ce32afcee33053723d51469859d7cc38a7fe962"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.833231 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5cczs" 
event={"ID":"e9446a39-9776-4de8-9137-b8952d336419","Type":"ContainerStarted","Data":"9ebf022f68edda676021cc0fbeb0a68174f6ad4fbc2e4ef2d9bb751280d4ec65"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.833277 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5cczs" event={"ID":"e9446a39-9776-4de8-9137-b8952d336419","Type":"ContainerStarted","Data":"1fdc6176317759ddd858337b2ed6123236c395a7fb1b03f05858040cdad255cb"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.846519 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rwcrv" event={"ID":"304c66b8-6187-47ac-9c57-235c634eaae4","Type":"ContainerStarted","Data":"a4a37b7be2b2f997a241b5ad06f4719213fa43e63628783dad8f0e263be8b6f5"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.846565 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rwcrv" event={"ID":"304c66b8-6187-47ac-9c57-235c634eaae4","Type":"ContainerStarted","Data":"220af10eecb4f8ff7f55663bdec7f3f351e2a96907240f1f43525f975f7ee9e2"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.858232 4874 generic.go:334] "Generic (PLEG): container finished" podID="16747491-2b7d-4cb1-841d-61c6f366cf8a" containerID="72a7a981031cece1c7f7c83a7047c6627dc5039ed7b07331ef32054a2f4fe2ca" exitCode=0 Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.858290 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5" event={"ID":"16747491-2b7d-4cb1-841d-61c6f366cf8a","Type":"ContainerDied","Data":"72a7a981031cece1c7f7c83a7047c6627dc5039ed7b07331ef32054a2f4fe2ca"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.858311 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5" 
event={"ID":"16747491-2b7d-4cb1-841d-61c6f366cf8a","Type":"ContainerStarted","Data":"b8aa1c4a3f08c4320af86054c2394b97a51506fdbd487a72ccf2be7d5442dde2"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.870021 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" event={"ID":"6c9561fa-20b6-4f87-aacc-cf0e0665ffa4","Type":"ContainerStarted","Data":"db5d00e60244a0cbee5256d6ffedf435df83bed45bda5a66f991aeebaf5871e4"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.871912 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c"] Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.873848 4874 generic.go:334] "Generic (PLEG): container finished" podID="ca44490a-7fc8-478d-a6e2-670f49816b81" containerID="0d06809c08191501d88b6b9c6ff0fe96ceb607cba26a2b0224d79047b5a8139f" exitCode=0 Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.873898 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" event={"ID":"ca44490a-7fc8-478d-a6e2-670f49816b81","Type":"ContainerDied","Data":"0d06809c08191501d88b6b9c6ff0fe96ceb607cba26a2b0224d79047b5a8139f"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.877770 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" event={"ID":"9bec261f-fd1e-44f7-a402-cae34f722b6c","Type":"ContainerStarted","Data":"fcf3b93bfefbdbe8ae3094c95bd63cf0c0416ba9835712bcfbc9f24cde2e379b"} Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.881602 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s5xxd" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.893899 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.907897 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:43 crc kubenswrapper[4874]: E0122 11:42:43.908230 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:44.4082196 +0000 UTC m=+138.253290670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.920467 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.923227 4874 csr.go:261] certificate signing request csr-x4fcz is approved, waiting to be issued Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.929048 4874 csr.go:257] certificate signing request csr-x4fcz is issued Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.952605 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk" Jan 22 11:42:43 crc kubenswrapper[4874]: I0122 11:42:43.962314 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.014167 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.015426 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:44.515411459 +0000 UTC m=+138.360482529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.116314 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.116850 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:44.616827437 +0000 UTC m=+138.461898507 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.202943 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" podStartSLOduration=120.202921591 podStartE2EDuration="2m0.202921591s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:44.201667242 +0000 UTC m=+138.046738402" watchObservedRunningTime="2026-01-22 11:42:44.202921591 +0000 UTC m=+138.047992651" Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.218053 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.218465 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:44.718424931 +0000 UTC m=+138.563496011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.218491 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.218756 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:44.718749571 +0000 UTC m=+138.563820641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.233953 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.319617 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.320139 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:44.820124567 +0000 UTC m=+138.665195637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.349599 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ssbbs" podStartSLOduration=120.349572226 podStartE2EDuration="2m0.349572226s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:44.347994235 +0000 UTC m=+138.193065305" watchObservedRunningTime="2026-01-22 11:42:44.349572226 +0000 UTC m=+138.194643296" Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.424115 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.424656 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:44.924642312 +0000 UTC m=+138.769713382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.443966 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:44 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:44 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:44 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.444017 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.526165 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.526325 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 11:42:45.026296768 +0000 UTC m=+138.871367838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.526389 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.526700 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.02669143 +0000 UTC m=+138.871762560 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.587981 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-wmwcz" podStartSLOduration=119.587960212 podStartE2EDuration="1m59.587960212s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:44.539189564 +0000 UTC m=+138.384260634" watchObservedRunningTime="2026-01-22 11:42:44.587960212 +0000 UTC m=+138.433031272" Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.627336 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.627537 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.127509479 +0000 UTC m=+138.972580559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.629011 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.629342 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.129330716 +0000 UTC m=+138.974401786 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.661491 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-869fn" podStartSLOduration=120.661469989 podStartE2EDuration="2m0.661469989s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:44.661317835 +0000 UTC m=+138.506388925" watchObservedRunningTime="2026-01-22 11:42:44.661469989 +0000 UTC m=+138.506541069" Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.706419 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5cczs" podStartSLOduration=120.706368355 podStartE2EDuration="2m0.706368355s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:44.703970609 +0000 UTC m=+138.549041679" watchObservedRunningTime="2026-01-22 11:42:44.706368355 +0000 UTC m=+138.551439425" Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.730483 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.730723 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.230691942 +0000 UTC m=+139.075763012 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.730817 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.731338 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.231330583 +0000 UTC m=+139.076401653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.832628 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.832833 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.332800221 +0000 UTC m=+139.177871291 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.832920 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.833378 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.333370519 +0000 UTC m=+139.178441589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.883442 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" event={"ID":"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a","Type":"ContainerStarted","Data":"3826d56a62c5fd00e3561a10b809db8a4a5499338a4f0cc48cdc80320d70af51"} Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.884329 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" event={"ID":"9fd4241f-b523-4d66-bcdb-c3bb691765c9","Type":"ContainerStarted","Data":"ebdd6071b5dc3f0f50c15dd7a89f7157d85c2d83e8eb328a3a8f9e41e0ceff64"} Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.885541 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mftbv" event={"ID":"8a67b9a8-ad8a-40e3-955c-53aed07a9140","Type":"ContainerStarted","Data":"f0c2f029d63ce8d982a1f81d10a536666090b310e7796537ca9f84d3c4a28a3a"} Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.889241 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jgw4c" event={"ID":"3b9983ba-ed8d-4654-ba74-f25433aa7ee7","Type":"ContainerStarted","Data":"5b427dcc0615916e55edeb38400ee108b23bccc5cf915114cd4663a0326595ae"} Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.930695 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-22 11:37:43 +0000 UTC, rotation deadline is 
2026-11-27 11:04:23.203483956 +0000 UTC Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.930740 4874 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7415h21m38.272747277s for next certificate rotation Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.934335 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.934632 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.434619872 +0000 UTC m=+139.279690942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:44 crc kubenswrapper[4874]: I0122 11:42:44.934731 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:44 crc kubenswrapper[4874]: E0122 11:42:44.935045 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.435037475 +0000 UTC m=+139.280108545 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.036845 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:45 crc kubenswrapper[4874]: E0122 11:42:45.037992 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.537977001 +0000 UTC m=+139.383048071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.138341 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:45 crc kubenswrapper[4874]: E0122 11:42:45.138903 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.638883742 +0000 UTC m=+139.483954812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.241051 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:45 crc kubenswrapper[4874]: E0122 11:42:45.241326 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.741302841 +0000 UTC m=+139.586373911 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.244432 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:45 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:45 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:45 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.244483 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.254851 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7h7c7" podStartSLOduration=121.254829038 podStartE2EDuration="2m1.254829038s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:45.206469353 +0000 UTC m=+139.051540413" watchObservedRunningTime="2026-01-22 11:42:45.254829038 +0000 UTC m=+139.099900118" Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.342631 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:45 crc kubenswrapper[4874]: E0122 11:42:45.343225 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.843197624 +0000 UTC m=+139.688268694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.357504 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wws2s"] Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.384930 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx"] Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.418854 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn"] Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.451161 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:45 crc kubenswrapper[4874]: E0122 11:42:45.451638 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:45.951615112 +0000 UTC m=+139.796686182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.463111 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2"] Jan 22 11:42:45 crc kubenswrapper[4874]: W0122 11:42:45.518990 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd80367da_a8d4_4b76_9b33_5526aad85229.slice/crio-aa7fd2628587b5bf777ed91fb7b1e43479d8ee301c2b3642663b5af0e306946b WatchSource:0}: Error finding container aa7fd2628587b5bf777ed91fb7b1e43479d8ee301c2b3642663b5af0e306946b: Status 404 returned error can't find the container with id aa7fd2628587b5bf777ed91fb7b1e43479d8ee301c2b3642663b5af0e306946b Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.552483 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:45 crc kubenswrapper[4874]: E0122 11:42:45.552893 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:46.052881165 +0000 UTC m=+139.897952225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.654125 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:45 crc kubenswrapper[4874]: E0122 11:42:45.654749 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:46.154725917 +0000 UTC m=+139.999796987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.739314 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sjhjk"] Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.745049 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6dvvn"] Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.755761 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:45 crc kubenswrapper[4874]: E0122 11:42:45.756719 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:46.256693851 +0000 UTC m=+140.101764921 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.769363 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl"] Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.788379 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc"] Jan 22 11:42:45 crc kubenswrapper[4874]: W0122 11:42:45.795619 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4370babd_21ea_4c7e_81b9_cdd611127094.slice/crio-76be14ef4c33b248a1626a35b1c81885884c2b29b18db891810d8b4924480d1f WatchSource:0}: Error finding container 76be14ef4c33b248a1626a35b1c81885884c2b29b18db891810d8b4924480d1f: Status 404 returned error can't find the container with id 76be14ef4c33b248a1626a35b1c81885884c2b29b18db891810d8b4924480d1f Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.820086 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rx6nv"] Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.839657 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s5xxd"] Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.855984 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87"] Jan 22 11:42:45 crc kubenswrapper[4874]: 
I0122 11:42:45.858910 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:45 crc kubenswrapper[4874]: E0122 11:42:45.859362 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:46.359346349 +0000 UTC m=+140.204417419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.872134 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw"] Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.888116 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk"] Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.888998 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f64ps"] Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.921602 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rwcrv" 
event={"ID":"304c66b8-6187-47ac-9c57-235c634eaae4","Type":"ContainerStarted","Data":"e56f4c7d4db0bae15a14fc7b2128ccfcbc9d52b63b70f91553c634c3854c88f2"} Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.945440 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8"] Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.948415 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc" event={"ID":"c045a558-1990-432e-9965-e918f60aba14","Type":"ContainerStarted","Data":"08ce10fae4bc524ea5d487aa15edc7a8c96be9cf3039bbe3b8438502f610d1c1"} Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.951047 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wws2s" event={"ID":"2dad3db6-cddd-457d-8efa-908257ef7cc5","Type":"ContainerStarted","Data":"eb821e05f105388a16d1f483490ba591cece2cfd3ac61a1d7e8d081cfcb2eed3"} Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.951082 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wws2s" event={"ID":"2dad3db6-cddd-457d-8efa-908257ef7cc5","Type":"ContainerStarted","Data":"65a9eee8d0422a080a2e9b23d35a87bc7b70cf7c5c56b2bd7476e73873a4352e"} Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.961517 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6"] Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.962082 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:45 crc kubenswrapper[4874]: E0122 11:42:45.962556 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:46.462534592 +0000 UTC m=+140.307605662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:45 crc kubenswrapper[4874]: I0122 11:42:45.989265 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm"] Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:45.997023 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m"] Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.001652 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx"] Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.008948 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jgw4c" event={"ID":"3b9983ba-ed8d-4654-ba74-f25433aa7ee7","Type":"ContainerStarted","Data":"9657bc5011b0e6d3cba42388e839ce8c1ef84788483d8f44ea92cb538f0db28d"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.009175 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l87tw"] Jan 22 11:42:46 crc kubenswrapper[4874]: 
I0122 11:42:46.009206 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jgw4c" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.021683 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" event={"ID":"213d34f5-75cd-459c-9e56-2938fe5e3950","Type":"ContainerStarted","Data":"e55fae314e5ccbaff31d4019382891789d84d2a8792b6ac6cab4bdcc20e97e1e"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.021717 4874 patch_prober.go:28] interesting pod/downloads-7954f5f757-jgw4c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.021830 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jgw4c" podUID="3b9983ba-ed8d-4654-ba74-f25433aa7ee7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.033667 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cqkbr"] Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.039423 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4"] Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.039466 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" event={"ID":"88c9b59f-1809-4252-a8fc-cad965848dc0","Type":"ContainerStarted","Data":"3ecbbc38ae38ecf2ffe8c819f8e71acbf0b4c3670a1bdce34ce15e80af22e2cf"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.039491 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" event={"ID":"88c9b59f-1809-4252-a8fc-cad965848dc0","Type":"ContainerStarted","Data":"bd0be220935c39fe411c3f192ea4860e097c146a3b74e883c05c2b278e2ae44e"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.058658 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s5xxd" event={"ID":"a54b65f9-405b-4a90-a9eb-e0438722ffa8","Type":"ContainerStarted","Data":"908285d52bb7b2261e2614aca0390cbf755acb3b0e494795250ea03447cbb38a"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.063148 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:46 crc kubenswrapper[4874]: E0122 11:42:46.065018 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:46.564995102 +0000 UTC m=+140.410066212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.071169 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2"] Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.071283 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kxvzk"] Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.073560 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dwcjx"] Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.074379 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rwcrv" podStartSLOduration=122.074357078 podStartE2EDuration="2m2.074357078s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:45.986331102 +0000 UTC m=+139.831402182" watchObservedRunningTime="2026-01-22 11:42:46.074357078 +0000 UTC m=+139.919428148" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.074736 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5" event={"ID":"16747491-2b7d-4cb1-841d-61c6f366cf8a","Type":"ContainerStarted","Data":"71a3b365a76cdda3ccbd96b1bfcbd8fc5a75457ba1b881350b13e9a0f2656599"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.074814 4874 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.090483 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-wws2s" podStartSLOduration=122.090460975 podStartE2EDuration="2m2.090460975s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:46.033509599 +0000 UTC m=+139.878580669" watchObservedRunningTime="2026-01-22 11:42:46.090460975 +0000 UTC m=+139.935532065" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.115648 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk" event={"ID":"308c337e-1e28-4e34-9ccb-8ae546eee089","Type":"ContainerStarted","Data":"be463a8baa10fb0031e37d1e347f370601b34a7e60a08b7f71e1880c31d503a3"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.128521 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dprzc" podStartSLOduration=121.128494094 podStartE2EDuration="2m1.128494094s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:46.064687022 +0000 UTC m=+139.909758102" watchObservedRunningTime="2026-01-22 11:42:46.128494094 +0000 UTC m=+139.973565164" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.129712 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jgw4c" podStartSLOduration=122.129705853 podStartE2EDuration="2m2.129705853s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:46.111097976 +0000 UTC m=+139.956169036" watchObservedRunningTime="2026-01-22 11:42:46.129705853 +0000 UTC m=+139.974776933" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.149601 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" event={"ID":"4370babd-21ea-4c7e-81b9-cdd611127094","Type":"ContainerStarted","Data":"76be14ef4c33b248a1626a35b1c81885884c2b29b18db891810d8b4924480d1f"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.151684 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5" podStartSLOduration=122.151667525 podStartE2EDuration="2m2.151667525s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:46.1499197 +0000 UTC m=+139.994990780" watchObservedRunningTime="2026-01-22 11:42:46.151667525 +0000 UTC m=+139.996738595" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.168828 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" event={"ID":"9fd4241f-b523-4d66-bcdb-c3bb691765c9","Type":"ContainerStarted","Data":"0846f2eb4e8eb047949a104105735cd3c58d1f0d30fc25a2c6f738a80fe669e8"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.169827 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.169949 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:46 crc kubenswrapper[4874]: E0122 11:42:46.170531 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:46.670518659 +0000 UTC m=+140.515589809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.183537 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl" event={"ID":"4ee5bb84-2932-4f1c-8af7-39e10c1260c2","Type":"ContainerStarted","Data":"487cdb5fd2da997d6662b515aa393cd39aee1d829581faded7a8209d4fd7ea1f"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.184368 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qfcsk" podStartSLOduration=122.184350965 podStartE2EDuration="2m2.184350965s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:46.183382475 +0000 UTC m=+140.028453545" watchObservedRunningTime="2026-01-22 
11:42:46.184350965 +0000 UTC m=+140.029422035" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.195363 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc" event={"ID":"51b2c452-70bd-4a3e-bd74-c7cd6513fd45","Type":"ContainerStarted","Data":"ac795fc6eacd1d28ef567fbec006e021c43ea9f8710edbc2ea7f75db77c4337c"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.197587 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" event={"ID":"1cc3bb5c-983c-4a20-bbb1-2bb0bd24cd5a","Type":"ContainerStarted","Data":"52cecb435816c6b6693584d7305d4bc7b0c6e9547728c405a0915d37d36f1e64"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.198707 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.210959 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cfzn7" event={"ID":"061a736b-3663-4a73-9836-19724fc4bb71","Type":"ContainerStarted","Data":"51ab62069e8e04efad5817dfc014b1805c7af17e23c9c681963c871384a4a77c"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.235291 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" podStartSLOduration=122.23525133 podStartE2EDuration="2m2.23525133s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:46.233823156 +0000 UTC m=+140.078894236" watchObservedRunningTime="2026-01-22 11:42:46.23525133 +0000 UTC m=+140.080322400" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.241931 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" event={"ID":"ca44490a-7fc8-478d-a6e2-670f49816b81","Type":"ContainerStarted","Data":"636e474851c2358938ca9c20c5d6b3952ed505125743dd5a92b549bbb459369a"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.244340 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:46 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:46 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:46 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.244387 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.251771 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mftbv" event={"ID":"8a67b9a8-ad8a-40e3-955c-53aed07a9140","Type":"ContainerStarted","Data":"b16db8afdc1fe803ec4b2fa8f1b30642d382469740f350302bd35307c74b82ad"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.252417 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.263431 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2" event={"ID":"d80367da-a8d4-4b76-9b33-5526aad85229","Type":"ContainerStarted","Data":"aa7fd2628587b5bf777ed91fb7b1e43479d8ee301c2b3642663b5af0e306946b"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 
11:42:46.263718 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" podStartSLOduration=121.263694997 podStartE2EDuration="2m1.263694997s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:46.262884281 +0000 UTC m=+140.107955351" watchObservedRunningTime="2026-01-22 11:42:46.263694997 +0000 UTC m=+140.108766067" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.271406 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:46 crc kubenswrapper[4874]: E0122 11:42:46.272732 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:46.772716752 +0000 UTC m=+140.617787822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.276598 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx" event={"ID":"68dcdc32-485b-435c-81cc-43be463998bb","Type":"ContainerStarted","Data":"e85515ccbecf22ae2317d95af2bacdfd7374fc2ea6b1a492248ab99963ad1934"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.276650 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx" event={"ID":"68dcdc32-485b-435c-81cc-43be463998bb","Type":"ContainerStarted","Data":"510bfb39c5dbd64db543d1bf41a991a9cd9ebe08f2eb0c92a25a2e8f9ee5a810"} Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.316904 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2" podStartSLOduration=122.316868704 podStartE2EDuration="2m2.316868704s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:46.315832102 +0000 UTC m=+140.160903182" watchObservedRunningTime="2026-01-22 11:42:46.316868704 +0000 UTC m=+140.161939774" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.317899 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cfzn7" 
podStartSLOduration=6.317882706 podStartE2EDuration="6.317882706s" podCreationTimestamp="2026-01-22 11:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:46.296991957 +0000 UTC m=+140.142063027" watchObservedRunningTime="2026-01-22 11:42:46.317882706 +0000 UTC m=+140.162953776" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.348318 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" podStartSLOduration=121.348301545 podStartE2EDuration="2m1.348301545s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:46.344745093 +0000 UTC m=+140.189816183" watchObservedRunningTime="2026-01-22 11:42:46.348301545 +0000 UTC m=+140.193372615" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.376239 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dnrhx" podStartSLOduration=122.376211295 podStartE2EDuration="2m2.376211295s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:46.375813862 +0000 UTC m=+140.220884932" watchObservedRunningTime="2026-01-22 11:42:46.376211295 +0000 UTC m=+140.221282365" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.376307 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:46 crc kubenswrapper[4874]: E0122 11:42:46.384414 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:46.884378062 +0000 UTC m=+140.729449132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.423983 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mftbv" podStartSLOduration=122.423964011 podStartE2EDuration="2m2.423964011s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:46.421588346 +0000 UTC m=+140.266659416" watchObservedRunningTime="2026-01-22 11:42:46.423964011 +0000 UTC m=+140.269035081" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.477548 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:46 crc kubenswrapper[4874]: E0122 11:42:46.477982 4874 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:46.977961914 +0000 UTC m=+140.823032994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.583139 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:46 crc kubenswrapper[4874]: E0122 11:42:46.583541 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:47.083525182 +0000 UTC m=+140.928596252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.686216 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:46 crc kubenswrapper[4874]: E0122 11:42:46.686687 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:47.186653863 +0000 UTC m=+141.031724923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.789579 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:46 crc kubenswrapper[4874]: E0122 11:42:46.790228 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:47.290216678 +0000 UTC m=+141.135287758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.866493 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.866899 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.880956 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6pr5c" Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.890847 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:46 crc kubenswrapper[4874]: E0122 11:42:46.891279 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:47.391258694 +0000 UTC m=+141.236329774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:46 crc kubenswrapper[4874]: I0122 11:42:46.992481 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:46 crc kubenswrapper[4874]: E0122 11:42:46.992908 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:47.492891219 +0000 UTC m=+141.337962289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.102439 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:47 crc kubenswrapper[4874]: E0122 11:42:47.102988 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:47.602967229 +0000 UTC m=+141.448038299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.170514 4874 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8j9tz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.170588 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" podUID="9fd4241f-b523-4d66-bcdb-c3bb691765c9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.207034 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:47 crc kubenswrapper[4874]: E0122 11:42:47.228493 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 11:42:47.728475997 +0000 UTC m=+141.573547067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.236738 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:47 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:47 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:47 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.236831 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.253678 4874 patch_prober.go:28] interesting pod/console-operator-58897d9998-mftbv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.253930 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mftbv" 
podUID="8a67b9a8-ad8a-40e3-955c-53aed07a9140" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.294575 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.298352 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" event={"ID":"88c9b59f-1809-4252-a8fc-cad965848dc0","Type":"ContainerStarted","Data":"647380be5685fa6b50f7f792740b85f949e8f14590a8e512665afee4f0b8f232"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.304323 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2" event={"ID":"8d7ccd8e-549f-4b23-bcec-4a1c14e10478","Type":"ContainerStarted","Data":"8abe32a851225fd5d1922fb4bae72a41eee4fece3bf7728f299ea9031439b8ed"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.304365 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2" event={"ID":"8d7ccd8e-549f-4b23-bcec-4a1c14e10478","Type":"ContainerStarted","Data":"3eb03704800aef4b382d689fe70eb2381222358dad2e910378a0a21f7f51d89c"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.314451 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:47 crc kubenswrapper[4874]: E0122 11:42:47.314840 4874 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:47.814825139 +0000 UTC m=+141.659896209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.326562 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" event={"ID":"67454032-f4d2-418e-afad-a48c0a80007d","Type":"ContainerStarted","Data":"df3978feba47cd340cd15446fb8141642ed468d9027e1ea9b0efb6ffb5809622"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.326679 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" event={"ID":"67454032-f4d2-418e-afad-a48c0a80007d","Type":"ContainerStarted","Data":"38bfbedd5752105bf5782fa31806e1ec9b484b61fdf5b89b62091b2355d9e701"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.330913 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rx6nv" event={"ID":"4f61e177-ea8e-4f61-8ba2-67906f08a00c","Type":"ContainerStarted","Data":"558b2d28a2ab0e3973110f8b8a82c613077883521bfb47a8a2b98b587b2161ce"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.334926 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" 
event={"ID":"0a15050a-7cbd-40f6-a656-a68293c0878a","Type":"ContainerStarted","Data":"059406f38b7eefe033761eb54f07a9cd0f802e07f88ff68ccbdb582fae9d8d28"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.334963 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" event={"ID":"0a15050a-7cbd-40f6-a656-a68293c0878a","Type":"ContainerStarted","Data":"c99ae11d334a79b6695ceefd2cce7368234bf383f78e3abddc229aa3df93fe78"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.337422 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc" event={"ID":"51b2c452-70bd-4a3e-bd74-c7cd6513fd45","Type":"ContainerStarted","Data":"cd1de5446be5dd102373100e35d09ee8ab1132fadaf45b1ae27a4679de5f15b0"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.341272 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" event={"ID":"8b5afeda-c180-4aa9-a831-a0840d495fa8","Type":"ContainerStarted","Data":"a8a6f08c614f9afd8df854c6dbc31398112a667166fd12bd4926f9e1ab58cbc0"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.341305 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" event={"ID":"8b5afeda-c180-4aa9-a831-a0840d495fa8","Type":"ContainerStarted","Data":"98a3c21d4aff6ce8285894f5ee62debaac34c5731c09beac5a15c46753df56eb"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.342077 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.347769 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" 
event={"ID":"65f71c2e-ab34-4d33-905f-609555dab78c","Type":"ContainerStarted","Data":"84a7330ee9f1d0d95ea8e0bf8a052aae4a2dfa5a3e264d704406ce13a297fbb1"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.348142 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" event={"ID":"65f71c2e-ab34-4d33-905f-609555dab78c","Type":"ContainerStarted","Data":"a9c651399a50d57c614df95905597a481669874ba9ca8205cf5d6198908fbe7c"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.348169 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.351087 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4" event={"ID":"f745af09-93ed-4c94-8411-f14b0aaaf1cf","Type":"ContainerStarted","Data":"e0c8fda880a7d1c41dcfeb90cb150869d269ef1f7a3b5eb0bae412e64024a346"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.351124 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4" event={"ID":"f745af09-93ed-4c94-8411-f14b0aaaf1cf","Type":"ContainerStarted","Data":"acd01a69492923986ce95bf35e608599514c2e340da10f8fc96437bf69c498a4"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.352773 4874 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dt8w6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.352817 4874 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" podUID="8b5afeda-c180-4aa9-a831-a0840d495fa8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.353068 4874 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-2bn87 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.353105 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" podUID="65f71c2e-ab34-4d33-905f-609555dab78c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.360335 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kxvzk" event={"ID":"41a97907-0515-44cd-b6a4-87463a9b3819","Type":"ContainerStarted","Data":"64fc50e2df0fd1b74488e28f23047834bb85e5066ec5debf1bf297a69177c9bd"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.370984 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl" event={"ID":"4ee5bb84-2932-4f1c-8af7-39e10c1260c2","Type":"ContainerStarted","Data":"fdc414e4a722957272fbe2e844434bbeab67b1a4d2e7f67709f7f840c0fadd59"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.374807 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8" 
event={"ID":"a3d1224e-a43e-4dca-8ad0-239dc50b6d58","Type":"ContainerStarted","Data":"99e494f37f7ff1730b693230e1cbf965963ea9b1694d43f3655c495b87182e96"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.374854 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8" event={"ID":"a3d1224e-a43e-4dca-8ad0-239dc50b6d58","Type":"ContainerStarted","Data":"1abeec4079be20567280eedb982b90bb26126680ec7f316575371854a41abc94"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.415642 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s5xxd" event={"ID":"a54b65f9-405b-4a90-a9eb-e0438722ffa8","Type":"ContainerStarted","Data":"e053f4fe36e9cf751669f7b77ee2f81f3c10d8d96e7ee32d01620e32539d1ee6"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.415682 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:47 crc kubenswrapper[4874]: E0122 11:42:47.415929 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:47.915917547 +0000 UTC m=+141.760988607 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.438606 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpjnl" podStartSLOduration=123.438579351 podStartE2EDuration="2m3.438579351s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:47.436197466 +0000 UTC m=+141.281268546" watchObservedRunningTime="2026-01-22 11:42:47.438579351 +0000 UTC m=+141.283650421" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.440603 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" event={"ID":"4370babd-21ea-4c7e-81b9-cdd611127094","Type":"ContainerStarted","Data":"e9d4fd6d2d34a9ae6171b6295d157abe4e467646bf8eab5eaeda4c6c84d3e4ea"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.456708 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" event={"ID":"574225b6-a476-4591-9be9-ddd94e5281ef","Type":"ContainerStarted","Data":"8f66be03fbd08062b606ad0121719d2dd562e0fa71df10028a45ad92e295ecf2"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.456750 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" 
event={"ID":"574225b6-a476-4591-9be9-ddd94e5281ef","Type":"ContainerStarted","Data":"7b30c3eda188977dd3f3fc4a6dd413c05a8e5c43f35ad2de13267b93fe586d76"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.457635 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.468610 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" event={"ID":"288e6358-c74b-4597-8968-726a31365f82","Type":"ContainerStarted","Data":"6bd7be9aa27d666be381a7fd11c421fab5dcf5107ce195055ac7e95d943119f5"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.468780 4874 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-b6c5m container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.468835 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" podUID="574225b6-a476-4591-9be9-ddd94e5281ef" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.473934 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm" event={"ID":"f87cb481-5261-4028-99e2-cd57ff6b61e1","Type":"ContainerStarted","Data":"ee99f08d20b4163a0b446d289abf922985a606f83627f0a4c1a459ac62475a83"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.486820 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" 
event={"ID":"213d34f5-75cd-459c-9e56-2938fe5e3950","Type":"ContainerStarted","Data":"a3ab1ecac652d1663a1d25c20a4f94b60f4dcf52866d48a08ff55d35fc806fe3"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.487303 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.494975 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" podStartSLOduration=122.494961998 podStartE2EDuration="2m2.494961998s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:47.469499736 +0000 UTC m=+141.314570806" watchObservedRunningTime="2026-01-22 11:42:47.494961998 +0000 UTC m=+141.340033058" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.503554 4874 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6dvvn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.503622 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" podUID="213d34f5-75cd-459c-9e56-2938fe5e3950" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.519325 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.519699 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6lgxn" podStartSLOduration=122.519685759 podStartE2EDuration="2m2.519685759s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:47.493857564 +0000 UTC m=+141.338928634" watchObservedRunningTime="2026-01-22 11:42:47.519685759 +0000 UTC m=+141.364756839" Jan 22 11:42:47 crc kubenswrapper[4874]: E0122 11:42:47.520496 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:48.020485013 +0000 UTC m=+141.865556083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.521751 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmhx8" podStartSLOduration=123.521740063 podStartE2EDuration="2m3.521740063s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:47.518812531 +0000 UTC m=+141.363883601" watchObservedRunningTime="2026-01-22 11:42:47.521740063 +0000 UTC m=+141.366811133" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.548252 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk" event={"ID":"a785e748-a557-4c24-8a4d-dc03cfc3c357","Type":"ContainerStarted","Data":"8a7e18cdf85973b3ea4e51ff8b10e08605086cbc0498e73cb8e3eb57f83837e6"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.548505 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk" event={"ID":"a785e748-a557-4c24-8a4d-dc03cfc3c357","Type":"ContainerStarted","Data":"b7ee0f5f651238d901bddc9fc4a5a84a9fa512ea93cb44bd4d6c58d7ba62b5ae"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.581598 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vsxx4" podStartSLOduration=123.581578199 podStartE2EDuration="2m3.581578199s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:47.543037384 +0000 UTC m=+141.388108454" watchObservedRunningTime="2026-01-22 11:42:47.581578199 +0000 UTC m=+141.426649269" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.581995 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6zpzc" podStartSLOduration=122.581991283 podStartE2EDuration="2m2.581991283s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:47.580481095 +0000 UTC m=+141.425552165" watchObservedRunningTime="2026-01-22 11:42:47.581991283 +0000 UTC m=+141.427062353" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.593244 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f64ps" event={"ID":"1795c220-db74-434e-9111-917ff6d95077","Type":"ContainerStarted","Data":"afefa41de116e68199e922f05b1e872d5782f38e366fdcfa816791519a8ca510"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.593433 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f64ps" event={"ID":"1795c220-db74-434e-9111-917ff6d95077","Type":"ContainerStarted","Data":"e17af57e0f59e83ba770c3db737189f5369bc87bbd276ea5eb99f256b0202cd7"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.607660 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" 
event={"ID":"6a39a4e6-a146-4ff3-8e90-eb516fe27e38","Type":"ContainerStarted","Data":"295b2c7f753febed4403ab96ad5c95131d05455fa1f58fbc53b3147f90ba7b5b"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.607697 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" event={"ID":"6a39a4e6-a146-4ff3-8e90-eb516fe27e38","Type":"ContainerStarted","Data":"0be9d83f8aee7488de8a04bf682b44f0aa9194a9f62592e763949214d15c368b"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.614831 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" podStartSLOduration=122.614814528 podStartE2EDuration="2m2.614814528s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:47.613553218 +0000 UTC m=+141.458624278" watchObservedRunningTime="2026-01-22 11:42:47.614814528 +0000 UTC m=+141.459885598" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.623724 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:47 crc kubenswrapper[4874]: E0122 11:42:47.626204 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:48.126187516 +0000 UTC m=+141.971258586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.631792 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l87tw" event={"ID":"7a9c3919-d376-4655-818e-3a88a8c0b883","Type":"ContainerStarted","Data":"d814de8e63d732aeae1c15bab08a600f371f9751ea8d100cd5ddd91e02fd32ed"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.654793 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" podStartSLOduration=122.654774847 podStartE2EDuration="2m2.654774847s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:47.651735362 +0000 UTC m=+141.496806422" watchObservedRunningTime="2026-01-22 11:42:47.654774847 +0000 UTC m=+141.499845917" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.666503 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s4fb2" event={"ID":"d80367da-a8d4-4b76-9b33-5526aad85229","Type":"ContainerStarted","Data":"416bed418a7c55213c0f7dd0edc12db3eb1e52d960bc08ec7924166a1b9043bc"} Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.677000 4874 patch_prober.go:28] interesting pod/downloads-7954f5f757-jgw4c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.677054 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jgw4c" podUID="3b9983ba-ed8d-4654-ba74-f25433aa7ee7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.691149 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.697610 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fl8nd" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.715174 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mftbv" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.747611 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:47 crc kubenswrapper[4874]: E0122 11:42:47.761993 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:48.261967767 +0000 UTC m=+142.107038837 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.822920 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" podStartSLOduration=122.822895438 podStartE2EDuration="2m2.822895438s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:47.822520526 +0000 UTC m=+141.667591606" watchObservedRunningTime="2026-01-22 11:42:47.822895438 +0000 UTC m=+141.667966508" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.859901 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:47 crc kubenswrapper[4874]: E0122 11:42:47.866091 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:48.36607274 +0000 UTC m=+142.211143810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.941340 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-s5xxd" podStartSLOduration=7.941315682 podStartE2EDuration="7.941315682s" podCreationTimestamp="2026-01-22 11:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:47.938628438 +0000 UTC m=+141.783699508" watchObservedRunningTime="2026-01-22 11:42:47.941315682 +0000 UTC m=+141.786386762" Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.963537 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:47 crc kubenswrapper[4874]: E0122 11:42:47.963868 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:48.463846433 +0000 UTC m=+142.308917503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:47 crc kubenswrapper[4874]: I0122 11:42:47.963940 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:47 crc kubenswrapper[4874]: E0122 11:42:47.964827 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:48.464806883 +0000 UTC m=+142.309877953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.044821 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-dwcjx" podStartSLOduration=123.044803365 podStartE2EDuration="2m3.044803365s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:48.042001717 +0000 UTC m=+141.887072787" watchObservedRunningTime="2026-01-22 11:42:48.044803365 +0000 UTC m=+141.889874435" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.064738 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:48 crc kubenswrapper[4874]: E0122 11:42:48.065068 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:48.565051784 +0000 UTC m=+142.410122854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.145146 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6jqjk" podStartSLOduration=123.145132308 podStartE2EDuration="2m3.145132308s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:48.144350304 +0000 UTC m=+141.989421374" watchObservedRunningTime="2026-01-22 11:42:48.145132308 +0000 UTC m=+141.990203378" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.145788 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sjhjk" podStartSLOduration=124.145783559 podStartE2EDuration="2m4.145783559s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:48.113599264 +0000 UTC m=+141.958670344" watchObservedRunningTime="2026-01-22 11:42:48.145783559 +0000 UTC m=+141.990854629" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.166645 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: 
\"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:48 crc kubenswrapper[4874]: E0122 11:42:48.167126 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:48.667110461 +0000 UTC m=+142.512181531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.203609 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" podStartSLOduration=123.203589481 podStartE2EDuration="2m3.203589481s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:48.19274612 +0000 UTC m=+142.037817190" watchObservedRunningTime="2026-01-22 11:42:48.203589481 +0000 UTC m=+142.048660551" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.244626 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:48 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:48 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:48 crc 
kubenswrapper[4874]: healthz check failed Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.245212 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.267709 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:48 crc kubenswrapper[4874]: E0122 11:42:48.268095 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:48.768080755 +0000 UTC m=+142.613151825 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.369544 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:48 crc kubenswrapper[4874]: E0122 11:42:48.369966 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:48.869955147 +0000 UTC m=+142.715026217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.471001 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:48 crc kubenswrapper[4874]: E0122 11:42:48.471314 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:48.971283412 +0000 UTC m=+142.816354482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.471608 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:48 crc kubenswrapper[4874]: E0122 11:42:48.472036 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:48.972021185 +0000 UTC m=+142.817092255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.572993 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:48 crc kubenswrapper[4874]: E0122 11:42:48.573418 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:49.073376971 +0000 UTC m=+142.918448051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.672327 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rx6nv" event={"ID":"4f61e177-ea8e-4f61-8ba2-67906f08a00c","Type":"ContainerStarted","Data":"e2ad4ec597599d88f96fd683a147efd30f859f39b2241cce51d81aefd182a0d0"} Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.672371 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rx6nv" event={"ID":"4f61e177-ea8e-4f61-8ba2-67906f08a00c","Type":"ContainerStarted","Data":"edd0852932e3487ad47d142c96cedcef6f74c46c481681268729bdeff6368ae4"} Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.674567 4874 generic.go:334] "Generic (PLEG): container finished" podID="288e6358-c74b-4597-8968-726a31365f82" containerID="e768dbe1f074a94bc522533a48c9c56cdc2451138e0425b58698e0fefde2a767" exitCode=0 Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.674653 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" event={"ID":"288e6358-c74b-4597-8968-726a31365f82","Type":"ContainerDied","Data":"e768dbe1f074a94bc522533a48c9c56cdc2451138e0425b58698e0fefde2a767"} Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.676034 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:48 crc kubenswrapper[4874]: E0122 11:42:48.676415 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:49.176383209 +0000 UTC m=+143.021454279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.676592 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm" event={"ID":"f87cb481-5261-4028-99e2-cd57ff6b61e1","Type":"ContainerStarted","Data":"db2bcd42d1984929d94787587108e648f04aeadd797d990928b45e142f5e0fed"} Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.682778 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kxvzk" event={"ID":"41a97907-0515-44cd-b6a4-87463a9b3819","Type":"ContainerStarted","Data":"a5103129246337de5be9e1e278501f97850bb06e8547497811d03ce40d6aebca"} Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.682820 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kxvzk" event={"ID":"41a97907-0515-44cd-b6a4-87463a9b3819","Type":"ContainerStarted","Data":"969bbedcf0c44414a0fb98fdd634b6bec509293125e4834ca9edd7247b104f5a"} Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.682987 
4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-kxvzk" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.688796 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f64ps" event={"ID":"1795c220-db74-434e-9111-917ff6d95077","Type":"ContainerStarted","Data":"9e86d6331e2e5bfe460602c5aec3829a0d20d45fbf3f2f0800246076bb46d1fb"} Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.693241 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l87tw" event={"ID":"7a9c3919-d376-4655-818e-3a88a8c0b883","Type":"ContainerStarted","Data":"517858dad1bddae80dce4e4109465357192315c04012a6485f483b33bc958c8a"} Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.698300 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rx6nv" podStartSLOduration=124.69828334 podStartE2EDuration="2m4.69828334s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:48.695916274 +0000 UTC m=+142.540987334" watchObservedRunningTime="2026-01-22 11:42:48.69828334 +0000 UTC m=+142.543354420" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.703654 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2" event={"ID":"8d7ccd8e-549f-4b23-bcec-4a1c14e10478","Type":"ContainerStarted","Data":"99b89d43849311e9d542317356dd7454d7f1912dc8012e16fc9ff5ad2f45234b"} Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.703907 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.709675 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" event={"ID":"67454032-f4d2-418e-afad-a48c0a80007d","Type":"ContainerStarted","Data":"fcf228c9117a38a0da1eb2447ee8dd31ed6ace0a705bd6cf118df239b6c7105e"} Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.710962 4874 patch_prober.go:28] interesting pod/downloads-7954f5f757-jgw4c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.711011 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jgw4c" podUID="3b9983ba-ed8d-4654-ba74-f25433aa7ee7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.711708 4874 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6dvvn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.711754 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" podUID="213d34f5-75cd-459c-9e56-2938fe5e3950" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.732039 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-f64ps" podStartSLOduration=123.732021143 podStartE2EDuration="2m3.732021143s" 
podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:48.730574428 +0000 UTC m=+142.575645508" watchObservedRunningTime="2026-01-22 11:42:48.732021143 +0000 UTC m=+142.577092213" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.750061 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.750099 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dt8w6" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.750130 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-b6c5m" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.776951 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:48 crc kubenswrapper[4874]: E0122 11:42:48.777188 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:49.277130245 +0000 UTC m=+143.122201315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.777451 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:48 crc kubenswrapper[4874]: E0122 11:42:48.778784 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:49.278775307 +0000 UTC m=+143.123846377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.807173 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gsfnm" podStartSLOduration=124.807151212 podStartE2EDuration="2m4.807151212s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:48.806864513 +0000 UTC m=+142.651935603" watchObservedRunningTime="2026-01-22 11:42:48.807151212 +0000 UTC m=+142.652222282" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.840548 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-kxvzk" podStartSLOduration=8.840526195 podStartE2EDuration="8.840526195s" podCreationTimestamp="2026-01-22 11:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:48.834103392 +0000 UTC m=+142.679174462" watchObservedRunningTime="2026-01-22 11:42:48.840526195 +0000 UTC m=+142.685597275" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.880597 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gv9x5" Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.882157 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:48 crc kubenswrapper[4874]: E0122 11:42:48.887815 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:49.387774714 +0000 UTC m=+143.232845784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.988184 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:48 crc kubenswrapper[4874]: E0122 11:42:48.989085 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:49.489072168 +0000 UTC m=+143.334143238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:48 crc kubenswrapper[4874]: I0122 11:42:48.990703 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-89ldw" podStartSLOduration=124.990674068 podStartE2EDuration="2m4.990674068s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:48.987358094 +0000 UTC m=+142.832429164" watchObservedRunningTime="2026-01-22 11:42:48.990674068 +0000 UTC m=+142.835745148" Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.017871 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2" podStartSLOduration=124.017847545 podStartE2EDuration="2m4.017847545s" podCreationTimestamp="2026-01-22 11:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:49.014709246 +0000 UTC m=+142.859780316" watchObservedRunningTime="2026-01-22 11:42:49.017847545 +0000 UTC m=+142.862918625" Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.089357 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:49 crc kubenswrapper[4874]: E0122 11:42:49.089573 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:49.589543996 +0000 UTC m=+143.434615066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.089815 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:49 crc kubenswrapper[4874]: E0122 11:42:49.090110 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:49.590099574 +0000 UTC m=+143.435170644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.190794 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:49 crc kubenswrapper[4874]: E0122 11:42:49.191377 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:49.691362106 +0000 UTC m=+143.536433176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.243835 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:49 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:49 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:49 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.243889 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.292923 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:49 crc kubenswrapper[4874]: E0122 11:42:49.293672 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 11:42:49.793655291 +0000 UTC m=+143.638726351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.394776 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:49 crc kubenswrapper[4874]: E0122 11:42:49.394921 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:49.894899074 +0000 UTC m=+143.739970154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.395161 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:49 crc kubenswrapper[4874]: E0122 11:42:49.395557 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:49.895546654 +0000 UTC m=+143.740617724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.496676 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:49 crc kubenswrapper[4874]: E0122 11:42:49.496861 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:49.996830277 +0000 UTC m=+143.841901347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.496942 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:49 crc kubenswrapper[4874]: E0122 11:42:49.497524 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:49.997509269 +0000 UTC m=+143.842580339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.598039 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:49 crc kubenswrapper[4874]: E0122 11:42:49.598241 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:50.098212914 +0000 UTC m=+143.943283984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.598529 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:49 crc kubenswrapper[4874]: E0122 11:42:49.598913 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:50.098899626 +0000 UTC m=+143.943970696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.661475 4874 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.699497 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:49 crc kubenswrapper[4874]: E0122 11:42:49.699826 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:50.199810828 +0000 UTC m=+144.044881898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.714700 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l87tw" event={"ID":"7a9c3919-d376-4655-818e-3a88a8c0b883","Type":"ContainerStarted","Data":"4a38d036b900efde16ecd783e642919b32ccb03f0034722b91a174d22da7d67e"} Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.714745 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l87tw" event={"ID":"7a9c3919-d376-4655-818e-3a88a8c0b883","Type":"ContainerStarted","Data":"0cd6eec0f86dfdb5ce52d515d3ada676af7aaac76c0a55e70afa6045c1ecc6b0"} Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.719167 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" event={"ID":"288e6358-c74b-4597-8968-726a31365f82","Type":"ContainerStarted","Data":"1fe1efdd8b19b54cbe4b074c9d510c5de89756de0dabe35ff3a33d6d3b8c34db"} Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.719198 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" event={"ID":"288e6358-c74b-4597-8968-726a31365f82","Type":"ContainerStarted","Data":"0382a03db8e05c46b9dbc58b391e1572cd8dd298eda6837a0b9c11a6730a974b"} Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.741463 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" podStartSLOduration=125.74144348 podStartE2EDuration="2m5.74144348s" 
podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:49.740528821 +0000 UTC m=+143.585599891" watchObservedRunningTime="2026-01-22 11:42:49.74144348 +0000 UTC m=+143.586514550" Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.802036 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:49 crc kubenswrapper[4874]: E0122 11:42:49.803497 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:42:50.303478586 +0000 UTC m=+144.148549656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jg4wj" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.816037 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bk5qd"] Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.817058 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.820892 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.828708 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bk5qd"] Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.890583 4874 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T11:42:49.661712986Z","Handler":null,"Name":""} Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.903626 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.903893 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9tqj\" (UniqueName: \"kubernetes.io/projected/829e346d-eb89-4705-83c4-99d02fca8971-kube-api-access-k9tqj\") pod \"community-operators-bk5qd\" (UID: \"829e346d-eb89-4705-83c4-99d02fca8971\") " pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.903925 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829e346d-eb89-4705-83c4-99d02fca8971-catalog-content\") pod \"community-operators-bk5qd\" (UID: \"829e346d-eb89-4705-83c4-99d02fca8971\") " pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:42:49 
crc kubenswrapper[4874]: I0122 11:42:49.903955 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829e346d-eb89-4705-83c4-99d02fca8971-utilities\") pod \"community-operators-bk5qd\" (UID: \"829e346d-eb89-4705-83c4-99d02fca8971\") " pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:42:49 crc kubenswrapper[4874]: E0122 11:42:49.904118 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 11:42:50.404090018 +0000 UTC m=+144.249161088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.923434 4874 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 11:42:49 crc kubenswrapper[4874]: I0122 11:42:49.923665 4874 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.003512 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r2mwt"] Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.004760 4874 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.005154 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829e346d-eb89-4705-83c4-99d02fca8971-catalog-content\") pod \"community-operators-bk5qd\" (UID: \"829e346d-eb89-4705-83c4-99d02fca8971\") " pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.005264 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829e346d-eb89-4705-83c4-99d02fca8971-utilities\") pod \"community-operators-bk5qd\" (UID: \"829e346d-eb89-4705-83c4-99d02fca8971\") " pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.005409 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.005555 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9tqj\" (UniqueName: \"kubernetes.io/projected/829e346d-eb89-4705-83c4-99d02fca8971-kube-api-access-k9tqj\") pod \"community-operators-bk5qd\" (UID: \"829e346d-eb89-4705-83c4-99d02fca8971\") " pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.006293 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829e346d-eb89-4705-83c4-99d02fca8971-utilities\") pod 
\"community-operators-bk5qd\" (UID: \"829e346d-eb89-4705-83c4-99d02fca8971\") " pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.006357 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829e346d-eb89-4705-83c4-99d02fca8971-catalog-content\") pod \"community-operators-bk5qd\" (UID: \"829e346d-eb89-4705-83c4-99d02fca8971\") " pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.006799 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.009114 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.009145 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.016116 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r2mwt"] Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.023775 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9tqj\" (UniqueName: \"kubernetes.io/projected/829e346d-eb89-4705-83c4-99d02fca8971-kube-api-access-k9tqj\") pod 
\"community-operators-bk5qd\" (UID: \"829e346d-eb89-4705-83c4-99d02fca8971\") " pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.035424 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jg4wj\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.107158 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.107387 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2xvq\" (UniqueName: \"kubernetes.io/projected/dea4a6eb-c0b1-432a-81f2-e417250b0138-kube-api-access-w2xvq\") pod \"certified-operators-r2mwt\" (UID: \"dea4a6eb-c0b1-432a-81f2-e417250b0138\") " pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.107428 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea4a6eb-c0b1-432a-81f2-e417250b0138-catalog-content\") pod \"certified-operators-r2mwt\" (UID: \"dea4a6eb-c0b1-432a-81f2-e417250b0138\") " pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.107444 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/dea4a6eb-c0b1-432a-81f2-e417250b0138-utilities\") pod \"certified-operators-r2mwt\" (UID: \"dea4a6eb-c0b1-432a-81f2-e417250b0138\") " pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.119983 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.154784 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.205905 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k7sj5"] Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.207073 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.208213 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2xvq\" (UniqueName: \"kubernetes.io/projected/dea4a6eb-c0b1-432a-81f2-e417250b0138-kube-api-access-w2xvq\") pod \"certified-operators-r2mwt\" (UID: \"dea4a6eb-c0b1-432a-81f2-e417250b0138\") " pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.208254 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea4a6eb-c0b1-432a-81f2-e417250b0138-catalog-content\") pod \"certified-operators-r2mwt\" (UID: \"dea4a6eb-c0b1-432a-81f2-e417250b0138\") " pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.208277 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea4a6eb-c0b1-432a-81f2-e417250b0138-utilities\") pod \"certified-operators-r2mwt\" (UID: \"dea4a6eb-c0b1-432a-81f2-e417250b0138\") " pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.208774 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea4a6eb-c0b1-432a-81f2-e417250b0138-utilities\") pod \"certified-operators-r2mwt\" (UID: \"dea4a6eb-c0b1-432a-81f2-e417250b0138\") " pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.209349 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea4a6eb-c0b1-432a-81f2-e417250b0138-catalog-content\") pod \"certified-operators-r2mwt\" (UID: \"dea4a6eb-c0b1-432a-81f2-e417250b0138\") " 
pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.219500 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k7sj5"] Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.227558 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2xvq\" (UniqueName: \"kubernetes.io/projected/dea4a6eb-c0b1-432a-81f2-e417250b0138-kube-api-access-w2xvq\") pod \"certified-operators-r2mwt\" (UID: \"dea4a6eb-c0b1-432a-81f2-e417250b0138\") " pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.235847 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.240711 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:50 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:50 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:50 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.240751 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.310470 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-catalog-content\") pod \"community-operators-k7sj5\" (UID: 
\"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\") " pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.310769 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djwcc\" (UniqueName: \"kubernetes.io/projected/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-kube-api-access-djwcc\") pod \"community-operators-k7sj5\" (UID: \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\") " pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.310845 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-utilities\") pod \"community-operators-k7sj5\" (UID: \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\") " pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.317249 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.411178 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cxrx9"] Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.411860 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-catalog-content\") pod \"community-operators-k7sj5\" (UID: \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\") " pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.411925 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djwcc\" (UniqueName: \"kubernetes.io/projected/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-kube-api-access-djwcc\") pod \"community-operators-k7sj5\" (UID: \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\") " pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.412028 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-utilities\") pod \"community-operators-k7sj5\" (UID: \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\") " pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.413213 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-catalog-content\") pod \"community-operators-k7sj5\" (UID: \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\") " pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.413886 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-utilities\") pod \"community-operators-k7sj5\" (UID: \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\") " pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.414390 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.419216 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cxrx9"] Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.431942 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djwcc\" (UniqueName: \"kubernetes.io/projected/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-kube-api-access-djwcc\") pod \"community-operators-k7sj5\" (UID: \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\") " pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.494070 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bk5qd"] Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.514322 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmfp\" (UniqueName: \"kubernetes.io/projected/ede22568-7d51-42d6-b96d-1783c4e5b370-kube-api-access-wzmfp\") pod \"certified-operators-cxrx9\" (UID: \"ede22568-7d51-42d6-b96d-1783c4e5b370\") " pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.514376 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede22568-7d51-42d6-b96d-1783c4e5b370-utilities\") pod \"certified-operators-cxrx9\" (UID: \"ede22568-7d51-42d6-b96d-1783c4e5b370\") " pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 
11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.514441 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede22568-7d51-42d6-b96d-1783c4e5b370-catalog-content\") pod \"certified-operators-cxrx9\" (UID: \"ede22568-7d51-42d6-b96d-1783c4e5b370\") " pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.531560 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:42:50 crc kubenswrapper[4874]: W0122 11:42:50.532649 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod829e346d_eb89_4705_83c4_99d02fca8971.slice/crio-06e653a8a66624e58a13fce521592f82bf5da1e8eff383092cdaf8dbc041daca WatchSource:0}: Error finding container 06e653a8a66624e58a13fce521592f82bf5da1e8eff383092cdaf8dbc041daca: Status 404 returned error can't find the container with id 06e653a8a66624e58a13fce521592f82bf5da1e8eff383092cdaf8dbc041daca Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.535796 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jg4wj"] Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.615179 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede22568-7d51-42d6-b96d-1783c4e5b370-catalog-content\") pod \"certified-operators-cxrx9\" (UID: \"ede22568-7d51-42d6-b96d-1783c4e5b370\") " pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.615286 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzmfp\" (UniqueName: \"kubernetes.io/projected/ede22568-7d51-42d6-b96d-1783c4e5b370-kube-api-access-wzmfp\") pod 
\"certified-operators-cxrx9\" (UID: \"ede22568-7d51-42d6-b96d-1783c4e5b370\") " pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.615306 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede22568-7d51-42d6-b96d-1783c4e5b370-utilities\") pod \"certified-operators-cxrx9\" (UID: \"ede22568-7d51-42d6-b96d-1783c4e5b370\") " pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.615972 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede22568-7d51-42d6-b96d-1783c4e5b370-utilities\") pod \"certified-operators-cxrx9\" (UID: \"ede22568-7d51-42d6-b96d-1783c4e5b370\") " pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.619673 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede22568-7d51-42d6-b96d-1783c4e5b370-catalog-content\") pod \"certified-operators-cxrx9\" (UID: \"ede22568-7d51-42d6-b96d-1783c4e5b370\") " pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.628678 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r2mwt"] Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.640361 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzmfp\" (UniqueName: \"kubernetes.io/projected/ede22568-7d51-42d6-b96d-1783c4e5b370-kube-api-access-wzmfp\") pod \"certified-operators-cxrx9\" (UID: \"ede22568-7d51-42d6-b96d-1783c4e5b370\") " pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.744766 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:42:50 crc kubenswrapper[4874]: I0122 11:42:50.772886 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-l87tw" podStartSLOduration=10.77286776 podStartE2EDuration="10.77286776s" podCreationTimestamp="2026-01-22 11:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:50.754896254 +0000 UTC m=+144.599967324" watchObservedRunningTime="2026-01-22 11:42:50.77286776 +0000 UTC m=+144.617938830" Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.043418 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.044180 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk5qd" event={"ID":"829e346d-eb89-4705-83c4-99d02fca8971","Type":"ContainerStarted","Data":"06e653a8a66624e58a13fce521592f82bf5da1e8eff383092cdaf8dbc041daca"} Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.044208 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2mwt" event={"ID":"dea4a6eb-c0b1-432a-81f2-e417250b0138","Type":"ContainerStarted","Data":"44f0f27f3f6817f2e87df4da350d3341a31946833627d7893617f4278baf12f8"} Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.044220 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k7sj5"] Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.044233 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" 
event={"ID":"90a037cd-cacf-4706-a857-f65c8f16c384","Type":"ContainerStarted","Data":"61abb1ca8d94b8a93a2470a92da41ddacd13e2ac8b6f3b50b461fb0f48f44b61"} Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.044243 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l87tw" event={"ID":"7a9c3919-d376-4655-818e-3a88a8c0b883","Type":"ContainerStarted","Data":"cb0e942e428230d003297b16951490d42d8fa2b63dde46bfa4bdd15dc61317fc"} Jan 22 11:42:51 crc kubenswrapper[4874]: W0122 11:42:51.050133 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d6b2fd5_d040_4be5_afb1_a375bddd3e88.slice/crio-d963c7b2a3b3c57b49606220d41e2548892102c0a65c18048da0d0bded52f8de WatchSource:0}: Error finding container d963c7b2a3b3c57b49606220d41e2548892102c0a65c18048da0d0bded52f8de: Status 404 returned error can't find the container with id d963c7b2a3b3c57b49606220d41e2548892102c0a65c18048da0d0bded52f8de Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.236224 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:51 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:51 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:51 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.236564 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.365316 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-cxrx9"] Jan 22 11:42:51 crc kubenswrapper[4874]: W0122 11:42:51.421808 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podede22568_7d51_42d6_b96d_1783c4e5b370.slice/crio-eb9fe6572677e18214bcf1bd4f031854d94ab0f4aacf889a0900f36d0c7491fc WatchSource:0}: Error finding container eb9fe6572677e18214bcf1bd4f031854d94ab0f4aacf889a0900f36d0c7491fc: Status 404 returned error can't find the container with id eb9fe6572677e18214bcf1bd4f031854d94ab0f4aacf889a0900f36d0c7491fc Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.734483 4874 generic.go:334] "Generic (PLEG): container finished" podID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" containerID="ab54d9fab26a3ef6698d354ee6813928ad7aedd283867428b79b6242016d11de" exitCode=0 Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.734801 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7sj5" event={"ID":"7d6b2fd5-d040-4be5-afb1-a375bddd3e88","Type":"ContainerDied","Data":"ab54d9fab26a3ef6698d354ee6813928ad7aedd283867428b79b6242016d11de"} Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.734827 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7sj5" event={"ID":"7d6b2fd5-d040-4be5-afb1-a375bddd3e88","Type":"ContainerStarted","Data":"d963c7b2a3b3c57b49606220d41e2548892102c0a65c18048da0d0bded52f8de"} Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.736326 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.737128 4874 generic.go:334] "Generic (PLEG): container finished" podID="ede22568-7d51-42d6-b96d-1783c4e5b370" containerID="769e221159e673ec03f3822d5d4059809f4101a42a4ddda5d439bdab06ed4064" exitCode=0 Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.737265 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxrx9" event={"ID":"ede22568-7d51-42d6-b96d-1783c4e5b370","Type":"ContainerDied","Data":"769e221159e673ec03f3822d5d4059809f4101a42a4ddda5d439bdab06ed4064"} Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.737333 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxrx9" event={"ID":"ede22568-7d51-42d6-b96d-1783c4e5b370","Type":"ContainerStarted","Data":"eb9fe6572677e18214bcf1bd4f031854d94ab0f4aacf889a0900f36d0c7491fc"} Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.739370 4874 generic.go:334] "Generic (PLEG): container finished" podID="829e346d-eb89-4705-83c4-99d02fca8971" containerID="e04166d257b1ab6ec62d2fb4997b1ee33b374447294ba53534e3f997572dbcbc" exitCode=0 Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.739442 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk5qd" event={"ID":"829e346d-eb89-4705-83c4-99d02fca8971","Type":"ContainerDied","Data":"e04166d257b1ab6ec62d2fb4997b1ee33b374447294ba53534e3f997572dbcbc"} Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.740971 4874 generic.go:334] "Generic (PLEG): container finished" podID="dea4a6eb-c0b1-432a-81f2-e417250b0138" containerID="f102fff60f0bad6a90aabfcc1da36f807499c6f5b663e355666a3b9418698f61" exitCode=0 Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.741005 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2mwt" event={"ID":"dea4a6eb-c0b1-432a-81f2-e417250b0138","Type":"ContainerDied","Data":"f102fff60f0bad6a90aabfcc1da36f807499c6f5b663e355666a3b9418698f61"} Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.743673 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" 
event={"ID":"90a037cd-cacf-4706-a857-f65c8f16c384","Type":"ContainerStarted","Data":"e38ad0098b8ea8cfa9bacebe690cc8d3c60a195c4c2554e6c860332c9db56abe"} Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.796997 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" podStartSLOduration=127.796967691 podStartE2EDuration="2m7.796967691s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:42:51.79347108 +0000 UTC m=+145.638542170" watchObservedRunningTime="2026-01-22 11:42:51.796967691 +0000 UTC m=+145.642038771" Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.805962 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xhb8k"] Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.806965 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.808546 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.839198 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhb8k"] Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.895260 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.895881 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.897689 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.897874 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.902896 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.943879 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-catalog-content\") pod \"redhat-marketplace-xhb8k\" (UID: \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\") " pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.943937 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-utilities\") pod \"redhat-marketplace-xhb8k\" (UID: \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\") " pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:42:51 crc kubenswrapper[4874]: I0122 11:42:51.944065 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79nw7\" (UniqueName: \"kubernetes.io/projected/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-kube-api-access-79nw7\") pod \"redhat-marketplace-xhb8k\" (UID: \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\") " pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.045650 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bfa0c46-abaf-4d63-964c-d8a91fafbc82-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5bfa0c46-abaf-4d63-964c-d8a91fafbc82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.045720 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-catalog-content\") pod \"redhat-marketplace-xhb8k\" (UID: \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\") " pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.045833 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-utilities\") pod \"redhat-marketplace-xhb8k\" (UID: \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\") " pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.045952 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bfa0c46-abaf-4d63-964c-d8a91fafbc82-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5bfa0c46-abaf-4d63-964c-d8a91fafbc82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.046133 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79nw7\" (UniqueName: \"kubernetes.io/projected/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-kube-api-access-79nw7\") pod \"redhat-marketplace-xhb8k\" (UID: \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\") " pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.046292 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-catalog-content\") pod \"redhat-marketplace-xhb8k\" (UID: \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\") " pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.046306 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-utilities\") pod \"redhat-marketplace-xhb8k\" (UID: \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\") " pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.064218 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79nw7\" (UniqueName: \"kubernetes.io/projected/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-kube-api-access-79nw7\") pod \"redhat-marketplace-xhb8k\" (UID: \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\") " pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.123174 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.147222 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bfa0c46-abaf-4d63-964c-d8a91fafbc82-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5bfa0c46-abaf-4d63-964c-d8a91fafbc82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.147311 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bfa0c46-abaf-4d63-964c-d8a91fafbc82-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5bfa0c46-abaf-4d63-964c-d8a91fafbc82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.147373 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bfa0c46-abaf-4d63-964c-d8a91fafbc82-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5bfa0c46-abaf-4d63-964c-d8a91fafbc82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.161972 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bfa0c46-abaf-4d63-964c-d8a91fafbc82-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5bfa0c46-abaf-4d63-964c-d8a91fafbc82\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.212268 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pbbps"] Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.213855 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.221656 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.226364 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbbps"] Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.241289 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:52 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:52 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:52 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.241331 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.351100 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194d73ac-fe2b-4e80-b03b-c1b780b55990-utilities\") pod \"redhat-marketplace-pbbps\" (UID: \"194d73ac-fe2b-4e80-b03b-c1b780b55990\") " pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.351198 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194d73ac-fe2b-4e80-b03b-c1b780b55990-catalog-content\") pod \"redhat-marketplace-pbbps\" (UID: 
\"194d73ac-fe2b-4e80-b03b-c1b780b55990\") " pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.351316 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrxzq\" (UniqueName: \"kubernetes.io/projected/194d73ac-fe2b-4e80-b03b-c1b780b55990-kube-api-access-jrxzq\") pod \"redhat-marketplace-pbbps\" (UID: \"194d73ac-fe2b-4e80-b03b-c1b780b55990\") " pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.374525 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhb8k"] Jan 22 11:42:52 crc kubenswrapper[4874]: W0122 11:42:52.422463 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09a47d07_9bf6_4033_8c08_cc3aef9fe4f4.slice/crio-4f0dfc138b95fd589daf732c95563b35b123c0075950db736b607653c673e3e1 WatchSource:0}: Error finding container 4f0dfc138b95fd589daf732c95563b35b123c0075950db736b607653c673e3e1: Status 404 returned error can't find the container with id 4f0dfc138b95fd589daf732c95563b35b123c0075950db736b607653c673e3e1 Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.452966 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194d73ac-fe2b-4e80-b03b-c1b780b55990-catalog-content\") pod \"redhat-marketplace-pbbps\" (UID: \"194d73ac-fe2b-4e80-b03b-c1b780b55990\") " pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.453065 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrxzq\" (UniqueName: \"kubernetes.io/projected/194d73ac-fe2b-4e80-b03b-c1b780b55990-kube-api-access-jrxzq\") pod \"redhat-marketplace-pbbps\" (UID: \"194d73ac-fe2b-4e80-b03b-c1b780b55990\") " 
pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.453109 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194d73ac-fe2b-4e80-b03b-c1b780b55990-utilities\") pod \"redhat-marketplace-pbbps\" (UID: \"194d73ac-fe2b-4e80-b03b-c1b780b55990\") " pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.453544 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194d73ac-fe2b-4e80-b03b-c1b780b55990-catalog-content\") pod \"redhat-marketplace-pbbps\" (UID: \"194d73ac-fe2b-4e80-b03b-c1b780b55990\") " pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.453555 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194d73ac-fe2b-4e80-b03b-c1b780b55990-utilities\") pod \"redhat-marketplace-pbbps\" (UID: \"194d73ac-fe2b-4e80-b03b-c1b780b55990\") " pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.471707 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrxzq\" (UniqueName: \"kubernetes.io/projected/194d73ac-fe2b-4e80-b03b-c1b780b55990-kube-api-access-jrxzq\") pod \"redhat-marketplace-pbbps\" (UID: \"194d73ac-fe2b-4e80-b03b-c1b780b55990\") " pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.495569 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.539199 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.560741 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.757309 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.757344 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.757391 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.765213 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.781467 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.790138 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.820320 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5bfa0c46-abaf-4d63-964c-d8a91fafbc82","Type":"ContainerStarted","Data":"38b9f2be77edd43a5d803726e5294fbc3482629ac4f147f216e1e74861fc0fee"} Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.838732 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.848626 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.860047 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.867087 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.890177 4874 generic.go:334] "Generic (PLEG): container finished" podID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" containerID="af7c8535e94f5d42ca318e78dacd324021ec14636cac5b8d6617938e7cede85c" exitCode=0 Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.891544 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhb8k" event={"ID":"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4","Type":"ContainerDied","Data":"af7c8535e94f5d42ca318e78dacd324021ec14636cac5b8d6617938e7cede85c"} Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.891587 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhb8k" event={"ID":"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4","Type":"ContainerStarted","Data":"4f0dfc138b95fd589daf732c95563b35b123c0075950db736b607653c673e3e1"} Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.891638 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:42:52 crc kubenswrapper[4874]: I0122 11:42:52.904643 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbbps"] Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.013318 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s2gqx"] Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.016899 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.019368 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.026060 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2gqx"] Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.107274 4874 patch_prober.go:28] interesting pod/downloads-7954f5f757-jgw4c container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.107328 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jgw4c" podUID="3b9983ba-ed8d-4654-ba74-f25433aa7ee7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.107547 4874 patch_prober.go:28] interesting pod/downloads-7954f5f757-jgw4c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Jan 22 11:42:53 crc 
kubenswrapper[4874]: I0122 11:42:53.107576 4874 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jgw4c" podUID="3b9983ba-ed8d-4654-ba74-f25433aa7ee7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.140675 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.163157 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-catalog-content\") pod \"redhat-operators-s2gqx\" (UID: \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\") " pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.163748 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92txx\" (UniqueName: \"kubernetes.io/projected/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-kube-api-access-92txx\") pod \"redhat-operators-s2gqx\" (UID: \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\") " pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.163811 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-utilities\") pod \"redhat-operators-s2gqx\" (UID: \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\") " pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.173183 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:53 crc 
kubenswrapper[4874]: I0122 11:42:53.174560 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.189196 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.189234 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.191037 4874 patch_prober.go:28] interesting pod/console-f9d7485db-wws2s container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.191103 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wws2s" podUID="2dad3db6-cddd-457d-8efa-908257ef7cc5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.194550 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.235876 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.244971 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:53 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 
22 11:42:53 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:53 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.245322 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.265904 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92txx\" (UniqueName: \"kubernetes.io/projected/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-kube-api-access-92txx\") pod \"redhat-operators-s2gqx\" (UID: \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\") " pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.265952 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-utilities\") pod \"redhat-operators-s2gqx\" (UID: \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\") " pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.265986 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-catalog-content\") pod \"redhat-operators-s2gqx\" (UID: \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\") " pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.268226 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-utilities\") pod \"redhat-operators-s2gqx\" (UID: \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\") " pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:42:53 crc 
kubenswrapper[4874]: I0122 11:42:53.269338 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-catalog-content\") pod \"redhat-operators-s2gqx\" (UID: \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\") " pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.315481 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92txx\" (UniqueName: \"kubernetes.io/projected/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-kube-api-access-92txx\") pod \"redhat-operators-s2gqx\" (UID: \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\") " pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.346737 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.431217 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-chgtv"] Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.440668 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.441090 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.463189 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chgtv"] Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.575316 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v87lj\" (UniqueName: \"kubernetes.io/projected/25f55928-a244-44f3-83a5-b6bdf551bda6-kube-api-access-v87lj\") pod \"redhat-operators-chgtv\" (UID: \"25f55928-a244-44f3-83a5-b6bdf551bda6\") " pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.575369 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f55928-a244-44f3-83a5-b6bdf551bda6-catalog-content\") pod \"redhat-operators-chgtv\" (UID: \"25f55928-a244-44f3-83a5-b6bdf551bda6\") " pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.575535 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f55928-a244-44f3-83a5-b6bdf551bda6-utilities\") pod \"redhat-operators-chgtv\" (UID: \"25f55928-a244-44f3-83a5-b6bdf551bda6\") " pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.676620 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v87lj\" (UniqueName: \"kubernetes.io/projected/25f55928-a244-44f3-83a5-b6bdf551bda6-kube-api-access-v87lj\") pod \"redhat-operators-chgtv\" (UID: 
\"25f55928-a244-44f3-83a5-b6bdf551bda6\") " pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.676683 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f55928-a244-44f3-83a5-b6bdf551bda6-catalog-content\") pod \"redhat-operators-chgtv\" (UID: \"25f55928-a244-44f3-83a5-b6bdf551bda6\") " pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.676719 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f55928-a244-44f3-83a5-b6bdf551bda6-utilities\") pod \"redhat-operators-chgtv\" (UID: \"25f55928-a244-44f3-83a5-b6bdf551bda6\") " pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.677288 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f55928-a244-44f3-83a5-b6bdf551bda6-utilities\") pod \"redhat-operators-chgtv\" (UID: \"25f55928-a244-44f3-83a5-b6bdf551bda6\") " pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.677895 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f55928-a244-44f3-83a5-b6bdf551bda6-catalog-content\") pod \"redhat-operators-chgtv\" (UID: \"25f55928-a244-44f3-83a5-b6bdf551bda6\") " pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.697351 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v87lj\" (UniqueName: \"kubernetes.io/projected/25f55928-a244-44f3-83a5-b6bdf551bda6-kube-api-access-v87lj\") pod \"redhat-operators-chgtv\" (UID: \"25f55928-a244-44f3-83a5-b6bdf551bda6\") " 
pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.720844 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2gqx"] Jan 22 11:42:53 crc kubenswrapper[4874]: W0122 11:42:53.757954 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d07947b_508d_4f12_ba1b_2d5f24a6db2c.slice/crio-880843b24ca67a3147842f7c7ec48125fab7fce5de031516ba04e79aff10f06a WatchSource:0}: Error finding container 880843b24ca67a3147842f7c7ec48125fab7fce5de031516ba04e79aff10f06a: Status 404 returned error can't find the container with id 880843b24ca67a3147842f7c7ec48125fab7fce5de031516ba04e79aff10f06a Jan 22 11:42:53 crc kubenswrapper[4874]: W0122 11:42:53.786764 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-8976c6b5a82cbf91c93238a03097b8c9df4789af32b7cf19a40426492a6a2a0b WatchSource:0}: Error finding container 8976c6b5a82cbf91c93238a03097b8c9df4789af32b7cf19a40426492a6a2a0b: Status 404 returned error can't find the container with id 8976c6b5a82cbf91c93238a03097b8c9df4789af32b7cf19a40426492a6a2a0b Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.826985 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.902369 4874 generic.go:334] "Generic (PLEG): container finished" podID="194d73ac-fe2b-4e80-b03b-c1b780b55990" containerID="1ecd914b99d6525c762a368e3ff1d7acae50f9b9d1238be939d00bb461aca834" exitCode=0 Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.902491 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbbps" event={"ID":"194d73ac-fe2b-4e80-b03b-c1b780b55990","Type":"ContainerDied","Data":"1ecd914b99d6525c762a368e3ff1d7acae50f9b9d1238be939d00bb461aca834"} Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.902519 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbbps" event={"ID":"194d73ac-fe2b-4e80-b03b-c1b780b55990","Type":"ContainerStarted","Data":"3b5a39abee6782e33316f7969db2ab23c5d711ceba05225f55e2ae3fb886847c"} Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.904238 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2gqx" event={"ID":"2d07947b-508d-4f12-ba1b-2d5f24a6db2c","Type":"ContainerStarted","Data":"880843b24ca67a3147842f7c7ec48125fab7fce5de031516ba04e79aff10f06a"} Jan 22 11:42:53 crc kubenswrapper[4874]: W0122 11:42:53.905280 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-86651d95f7a3ee7b3be1f4b424e626a1f03634119a3491f3dc562bc6709a4f14 WatchSource:0}: Error finding container 86651d95f7a3ee7b3be1f4b424e626a1f03634119a3491f3dc562bc6709a4f14: Status 404 returned error can't find the container with id 86651d95f7a3ee7b3be1f4b424e626a1f03634119a3491f3dc562bc6709a4f14 Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.910743 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7b1a3cd0f3714333cdad6aaa71b8709438bbefc59888541682aea1b2c0a1a6d2"} Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.910789 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d7338913815c5a3aedcf53c4f221dd3462e33965c1aac01ecf2ee057baef0eec"} Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.924297 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8976c6b5a82cbf91c93238a03097b8c9df4789af32b7cf19a40426492a6a2a0b"} Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.948064 4874 generic.go:334] "Generic (PLEG): container finished" podID="5bfa0c46-abaf-4d63-964c-d8a91fafbc82" containerID="127975d4720e9bff00fac314035fd53ba7989af2ade4f2461c7a1cbf032b68a0" exitCode=0 Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.948539 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5bfa0c46-abaf-4d63-964c-d8a91fafbc82","Type":"ContainerDied","Data":"127975d4720e9bff00fac314035fd53ba7989af2ade4f2461c7a1cbf032b68a0"} Jan 22 11:42:53 crc kubenswrapper[4874]: I0122 11:42:53.967618 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-cqkbr" Jan 22 11:42:54 crc kubenswrapper[4874]: I0122 11:42:54.235823 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 
11:42:54 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:54 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:54 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:54 crc kubenswrapper[4874]: I0122 11:42:54.236044 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:54 crc kubenswrapper[4874]: I0122 11:42:54.322748 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chgtv"] Jan 22 11:42:54 crc kubenswrapper[4874]: W0122 11:42:54.343542 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25f55928_a244_44f3_83a5_b6bdf551bda6.slice/crio-f695035bc56b57dcd6c7f8fc2e3f5c5198706ed0269a97d26b4dab2f6db84183 WatchSource:0}: Error finding container f695035bc56b57dcd6c7f8fc2e3f5c5198706ed0269a97d26b4dab2f6db84183: Status 404 returned error can't find the container with id f695035bc56b57dcd6c7f8fc2e3f5c5198706ed0269a97d26b4dab2f6db84183 Jan 22 11:42:54 crc kubenswrapper[4874]: I0122 11:42:54.990021 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1814b08775c033da9be344eecd2d9fdad98d56c9e03566ca118519e42a96b535"} Jan 22 11:42:54 crc kubenswrapper[4874]: I0122 11:42:54.990322 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:42:54 crc kubenswrapper[4874]: I0122 11:42:54.996567 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e7640c9b37b5d1dd90fc0a4bd9176f6aea51aec5b3333bc3882dc215a8ce61a4"} Jan 22 11:42:54 crc kubenswrapper[4874]: I0122 11:42:54.996627 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"86651d95f7a3ee7b3be1f4b424e626a1f03634119a3491f3dc562bc6709a4f14"} Jan 22 11:42:55 crc kubenswrapper[4874]: I0122 11:42:55.038916 4874 generic.go:334] "Generic (PLEG): container finished" podID="25f55928-a244-44f3-83a5-b6bdf551bda6" containerID="a94610599ca8c298d3b9c45ba5eaf696e08b7bd8dfa7298e41a50594597af12d" exitCode=0 Jan 22 11:42:55 crc kubenswrapper[4874]: I0122 11:42:55.039218 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chgtv" event={"ID":"25f55928-a244-44f3-83a5-b6bdf551bda6","Type":"ContainerDied","Data":"a94610599ca8c298d3b9c45ba5eaf696e08b7bd8dfa7298e41a50594597af12d"} Jan 22 11:42:55 crc kubenswrapper[4874]: I0122 11:42:55.039261 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chgtv" event={"ID":"25f55928-a244-44f3-83a5-b6bdf551bda6","Type":"ContainerStarted","Data":"f695035bc56b57dcd6c7f8fc2e3f5c5198706ed0269a97d26b4dab2f6db84183"} Jan 22 11:42:55 crc kubenswrapper[4874]: I0122 11:42:55.052707 4874 generic.go:334] "Generic (PLEG): container finished" podID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" containerID="b45c0a0edb45e712987d45512f5f8181affde43a34f168d3078df992db0dd1ef" exitCode=0 Jan 22 11:42:55 crc kubenswrapper[4874]: I0122 11:42:55.053549 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2gqx" event={"ID":"2d07947b-508d-4f12-ba1b-2d5f24a6db2c","Type":"ContainerDied","Data":"b45c0a0edb45e712987d45512f5f8181affde43a34f168d3078df992db0dd1ef"} Jan 22 11:42:55 crc 
kubenswrapper[4874]: I0122 11:42:55.238726 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:55 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:55 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:55 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:55 crc kubenswrapper[4874]: I0122 11:42:55.238776 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:55 crc kubenswrapper[4874]: I0122 11:42:55.565673 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 11:42:55 crc kubenswrapper[4874]: I0122 11:42:55.611433 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bfa0c46-abaf-4d63-964c-d8a91fafbc82-kube-api-access\") pod \"5bfa0c46-abaf-4d63-964c-d8a91fafbc82\" (UID: \"5bfa0c46-abaf-4d63-964c-d8a91fafbc82\") " Jan 22 11:42:55 crc kubenswrapper[4874]: I0122 11:42:55.611539 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bfa0c46-abaf-4d63-964c-d8a91fafbc82-kubelet-dir\") pod \"5bfa0c46-abaf-4d63-964c-d8a91fafbc82\" (UID: \"5bfa0c46-abaf-4d63-964c-d8a91fafbc82\") " Jan 22 11:42:55 crc kubenswrapper[4874]: I0122 11:42:55.611799 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bfa0c46-abaf-4d63-964c-d8a91fafbc82-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"5bfa0c46-abaf-4d63-964c-d8a91fafbc82" (UID: "5bfa0c46-abaf-4d63-964c-d8a91fafbc82"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:42:55 crc kubenswrapper[4874]: I0122 11:42:55.631963 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bfa0c46-abaf-4d63-964c-d8a91fafbc82-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5bfa0c46-abaf-4d63-964c-d8a91fafbc82" (UID: "5bfa0c46-abaf-4d63-964c-d8a91fafbc82"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:42:55 crc kubenswrapper[4874]: I0122 11:42:55.713245 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bfa0c46-abaf-4d63-964c-d8a91fafbc82-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:42:55 crc kubenswrapper[4874]: I0122 11:42:55.713609 4874 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bfa0c46-abaf-4d63-964c-d8a91fafbc82-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:42:56 crc kubenswrapper[4874]: I0122 11:42:56.087427 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 11:42:56 crc kubenswrapper[4874]: I0122 11:42:56.094517 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5bfa0c46-abaf-4d63-964c-d8a91fafbc82","Type":"ContainerDied","Data":"38b9f2be77edd43a5d803726e5294fbc3482629ac4f147f216e1e74861fc0fee"} Jan 22 11:42:56 crc kubenswrapper[4874]: I0122 11:42:56.094569 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b9f2be77edd43a5d803726e5294fbc3482629ac4f147f216e1e74861fc0fee" Jan 22 11:42:56 crc kubenswrapper[4874]: I0122 11:42:56.236129 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:56 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:56 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:56 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:56 crc kubenswrapper[4874]: I0122 11:42:56.236187 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:57 crc kubenswrapper[4874]: I0122 11:42:57.106939 4874 generic.go:334] "Generic (PLEG): container finished" podID="0a15050a-7cbd-40f6-a656-a68293c0878a" containerID="059406f38b7eefe033761eb54f07a9cd0f802e07f88ff68ccbdb582fae9d8d28" exitCode=0 Jan 22 11:42:57 crc kubenswrapper[4874]: I0122 11:42:57.106992 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" 
event={"ID":"0a15050a-7cbd-40f6-a656-a68293c0878a","Type":"ContainerDied","Data":"059406f38b7eefe033761eb54f07a9cd0f802e07f88ff68ccbdb582fae9d8d28"} Jan 22 11:42:57 crc kubenswrapper[4874]: I0122 11:42:57.234680 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:57 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:57 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:57 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:57 crc kubenswrapper[4874]: I0122 11:42:57.234741 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:57 crc kubenswrapper[4874]: I0122 11:42:57.894533 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 11:42:57 crc kubenswrapper[4874]: E0122 11:42:57.894744 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bfa0c46-abaf-4d63-964c-d8a91fafbc82" containerName="pruner" Jan 22 11:42:57 crc kubenswrapper[4874]: I0122 11:42:57.894756 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bfa0c46-abaf-4d63-964c-d8a91fafbc82" containerName="pruner" Jan 22 11:42:57 crc kubenswrapper[4874]: I0122 11:42:57.894867 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bfa0c46-abaf-4d63-964c-d8a91fafbc82" containerName="pruner" Jan 22 11:42:57 crc kubenswrapper[4874]: I0122 11:42:57.895214 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 11:42:57 crc kubenswrapper[4874]: I0122 11:42:57.899206 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 11:42:57 crc kubenswrapper[4874]: I0122 11:42:57.899330 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 11:42:57 crc kubenswrapper[4874]: I0122 11:42:57.913325 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 11:42:57 crc kubenswrapper[4874]: I0122 11:42:57.971795 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65cb8670-9db0-44c0-9c3e-b2d669f62ba5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"65cb8670-9db0-44c0-9c3e-b2d669f62ba5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 11:42:57 crc kubenswrapper[4874]: I0122 11:42:57.972158 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65cb8670-9db0-44c0-9c3e-b2d669f62ba5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"65cb8670-9db0-44c0-9c3e-b2d669f62ba5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.073734 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65cb8670-9db0-44c0-9c3e-b2d669f62ba5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"65cb8670-9db0-44c0-9c3e-b2d669f62ba5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.073801 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/65cb8670-9db0-44c0-9c3e-b2d669f62ba5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"65cb8670-9db0-44c0-9c3e-b2d669f62ba5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.073876 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65cb8670-9db0-44c0-9c3e-b2d669f62ba5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"65cb8670-9db0-44c0-9c3e-b2d669f62ba5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.097923 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65cb8670-9db0-44c0-9c3e-b2d669f62ba5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"65cb8670-9db0-44c0-9c3e-b2d669f62ba5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.225708 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.235733 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:58 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:58 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:58 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.235785 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.595145 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.684065 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjztm\" (UniqueName: \"kubernetes.io/projected/0a15050a-7cbd-40f6-a656-a68293c0878a-kube-api-access-fjztm\") pod \"0a15050a-7cbd-40f6-a656-a68293c0878a\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.684135 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a15050a-7cbd-40f6-a656-a68293c0878a-config-volume\") pod \"0a15050a-7cbd-40f6-a656-a68293c0878a\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.684187 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a15050a-7cbd-40f6-a656-a68293c0878a-secret-volume\") pod \"0a15050a-7cbd-40f6-a656-a68293c0878a\" (UID: \"0a15050a-7cbd-40f6-a656-a68293c0878a\") " Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.687264 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a15050a-7cbd-40f6-a656-a68293c0878a-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a15050a-7cbd-40f6-a656-a68293c0878a" (UID: "0a15050a-7cbd-40f6-a656-a68293c0878a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.695695 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a15050a-7cbd-40f6-a656-a68293c0878a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a15050a-7cbd-40f6-a656-a68293c0878a" (UID: "0a15050a-7cbd-40f6-a656-a68293c0878a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.695807 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a15050a-7cbd-40f6-a656-a68293c0878a-kube-api-access-fjztm" (OuterVolumeSpecName: "kube-api-access-fjztm") pod "0a15050a-7cbd-40f6-a656-a68293c0878a" (UID: "0a15050a-7cbd-40f6-a656-a68293c0878a"). InnerVolumeSpecName "kube-api-access-fjztm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.777910 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.792793 4874 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a15050a-7cbd-40f6-a656-a68293c0878a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.792829 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjztm\" (UniqueName: \"kubernetes.io/projected/0a15050a-7cbd-40f6-a656-a68293c0878a-kube-api-access-fjztm\") on node \"crc\" DevicePath \"\"" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.792838 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a15050a-7cbd-40f6-a656-a68293c0878a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:42:58 crc kubenswrapper[4874]: I0122 11:42:58.819948 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kxvzk" Jan 22 11:42:59 crc kubenswrapper[4874]: I0122 11:42:59.168577 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" Jan 22 11:42:59 crc kubenswrapper[4874]: I0122 11:42:59.168584 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx" event={"ID":"0a15050a-7cbd-40f6-a656-a68293c0878a","Type":"ContainerDied","Data":"c99ae11d334a79b6695ceefd2cce7368234bf383f78e3abddc229aa3df93fe78"} Jan 22 11:42:59 crc kubenswrapper[4874]: I0122 11:42:59.168985 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c99ae11d334a79b6695ceefd2cce7368234bf383f78e3abddc229aa3df93fe78" Jan 22 11:42:59 crc kubenswrapper[4874]: I0122 11:42:59.171594 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"65cb8670-9db0-44c0-9c3e-b2d669f62ba5","Type":"ContainerStarted","Data":"5bec84c9490bc682a6fa2fdebd7c428a191648fb4264b6efafc24df59962bdff"} Jan 22 11:42:59 crc kubenswrapper[4874]: I0122 11:42:59.239467 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:42:59 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:42:59 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:42:59 crc kubenswrapper[4874]: healthz check failed Jan 22 11:42:59 crc kubenswrapper[4874]: I0122 11:42:59.239514 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:43:00 crc kubenswrapper[4874]: I0122 11:43:00.234741 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:43:00 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:43:00 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:43:00 crc kubenswrapper[4874]: healthz check failed Jan 22 11:43:00 crc kubenswrapper[4874]: I0122 11:43:00.234820 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:43:01 crc kubenswrapper[4874]: I0122 11:43:01.183504 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"65cb8670-9db0-44c0-9c3e-b2d669f62ba5","Type":"ContainerStarted","Data":"65f182bef3594b8ab528ba84f8dc70d785e8c2cabbc772465cad9deb5da116cd"} Jan 22 11:43:01 crc kubenswrapper[4874]: I0122 11:43:01.235686 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:43:01 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:43:01 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:43:01 crc kubenswrapper[4874]: healthz check failed Jan 22 11:43:01 crc kubenswrapper[4874]: I0122 11:43:01.235751 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:43:02 crc kubenswrapper[4874]: I0122 11:43:02.191946 4874 generic.go:334] "Generic (PLEG): container finished" podID="65cb8670-9db0-44c0-9c3e-b2d669f62ba5" 
containerID="65f182bef3594b8ab528ba84f8dc70d785e8c2cabbc772465cad9deb5da116cd" exitCode=0 Jan 22 11:43:02 crc kubenswrapper[4874]: I0122 11:43:02.192010 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"65cb8670-9db0-44c0-9c3e-b2d669f62ba5","Type":"ContainerDied","Data":"65f182bef3594b8ab528ba84f8dc70d785e8c2cabbc772465cad9deb5da116cd"} Jan 22 11:43:02 crc kubenswrapper[4874]: I0122 11:43:02.235411 4874 patch_prober.go:28] interesting pod/router-default-5444994796-5cczs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:43:02 crc kubenswrapper[4874]: [-]has-synced failed: reason withheld Jan 22 11:43:02 crc kubenswrapper[4874]: [+]process-running ok Jan 22 11:43:02 crc kubenswrapper[4874]: healthz check failed Jan 22 11:43:02 crc kubenswrapper[4874]: I0122 11:43:02.235555 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cczs" podUID="e9446a39-9776-4de8-9137-b8952d336419" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:43:03 crc kubenswrapper[4874]: I0122 11:43:03.111646 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jgw4c" Jan 22 11:43:03 crc kubenswrapper[4874]: I0122 11:43:03.189062 4874 patch_prober.go:28] interesting pod/console-f9d7485db-wws2s container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 22 11:43:03 crc kubenswrapper[4874]: I0122 11:43:03.189142 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wws2s" podUID="2dad3db6-cddd-457d-8efa-908257ef7cc5" containerName="console" 
probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 22 11:43:03 crc kubenswrapper[4874]: I0122 11:43:03.236115 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:43:03 crc kubenswrapper[4874]: I0122 11:43:03.240890 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5cczs" Jan 22 11:43:06 crc kubenswrapper[4874]: I0122 11:43:06.513779 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:43:06 crc kubenswrapper[4874]: I0122 11:43:06.523662 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5451fbab-ebad-42e7-bb80-f94bad10d571-metrics-certs\") pod \"network-metrics-daemon-lr2vd\" (UID: \"5451fbab-ebad-42e7-bb80-f94bad10d571\") " pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:43:06 crc kubenswrapper[4874]: I0122 11:43:06.652200 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lr2vd" Jan 22 11:43:07 crc kubenswrapper[4874]: I0122 11:43:07.473390 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 11:43:07 crc kubenswrapper[4874]: I0122 11:43:07.629981 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65cb8670-9db0-44c0-9c3e-b2d669f62ba5-kubelet-dir\") pod \"65cb8670-9db0-44c0-9c3e-b2d669f62ba5\" (UID: \"65cb8670-9db0-44c0-9c3e-b2d669f62ba5\") " Jan 22 11:43:07 crc kubenswrapper[4874]: I0122 11:43:07.630316 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65cb8670-9db0-44c0-9c3e-b2d669f62ba5-kube-api-access\") pod \"65cb8670-9db0-44c0-9c3e-b2d669f62ba5\" (UID: \"65cb8670-9db0-44c0-9c3e-b2d669f62ba5\") " Jan 22 11:43:07 crc kubenswrapper[4874]: I0122 11:43:07.630152 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cb8670-9db0-44c0-9c3e-b2d669f62ba5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "65cb8670-9db0-44c0-9c3e-b2d669f62ba5" (UID: "65cb8670-9db0-44c0-9c3e-b2d669f62ba5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:43:07 crc kubenswrapper[4874]: I0122 11:43:07.633810 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65cb8670-9db0-44c0-9c3e-b2d669f62ba5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "65cb8670-9db0-44c0-9c3e-b2d669f62ba5" (UID: "65cb8670-9db0-44c0-9c3e-b2d669f62ba5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:43:07 crc kubenswrapper[4874]: I0122 11:43:07.731389 4874 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65cb8670-9db0-44c0-9c3e-b2d669f62ba5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:43:07 crc kubenswrapper[4874]: I0122 11:43:07.731451 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65cb8670-9db0-44c0-9c3e-b2d669f62ba5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:43:08 crc kubenswrapper[4874]: I0122 11:43:08.238153 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"65cb8670-9db0-44c0-9c3e-b2d669f62ba5","Type":"ContainerDied","Data":"5bec84c9490bc682a6fa2fdebd7c428a191648fb4264b6efafc24df59962bdff"} Jan 22 11:43:08 crc kubenswrapper[4874]: I0122 11:43:08.238220 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bec84c9490bc682a6fa2fdebd7c428a191648fb4264b6efafc24df59962bdff" Jan 22 11:43:08 crc kubenswrapper[4874]: I0122 11:43:08.238243 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 11:43:10 crc kubenswrapper[4874]: I0122 11:43:10.242709 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:43:13 crc kubenswrapper[4874]: I0122 11:43:13.195433 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:43:13 crc kubenswrapper[4874]: I0122 11:43:13.202369 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-wws2s" Jan 22 11:43:13 crc kubenswrapper[4874]: I0122 11:43:13.521466 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:43:13 crc kubenswrapper[4874]: I0122 11:43:13.521536 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:43:15 crc kubenswrapper[4874]: I0122 11:43:15.843009 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lr2vd"] Jan 22 11:43:23 crc kubenswrapper[4874]: I0122 11:43:23.445121 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vqrw2" Jan 22 11:43:25 crc kubenswrapper[4874]: E0122 11:43:25.386271 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 11:43:25 crc kubenswrapper[4874]: E0122 11:43:25.386513 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrxzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-pbbps_openshift-marketplace(194d73ac-fe2b-4e80-b03b-c1b780b55990): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 22 11:43:25 crc kubenswrapper[4874]: E0122 11:43:25.387637 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-pbbps" podUID="194d73ac-fe2b-4e80-b03b-c1b780b55990" Jan 22 11:43:26 crc kubenswrapper[4874]: E0122 11:43:26.983077 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 11:43:26 crc kubenswrapper[4874]: E0122 11:43:26.983550 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9tqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bk5qd_openshift-marketplace(829e346d-eb89-4705-83c4-99d02fca8971): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 11:43:26 crc kubenswrapper[4874]: E0122 11:43:26.984725 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bk5qd" podUID="829e346d-eb89-4705-83c4-99d02fca8971" Jan 22 11:43:30 crc 
kubenswrapper[4874]: E0122 11:43:30.429367 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 11:43:30 crc kubenswrapper[4874]: E0122 11:43:30.429883 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92txx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-s2gqx_openshift-marketplace(2d07947b-508d-4f12-ba1b-2d5f24a6db2c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 11:43:30 crc kubenswrapper[4874]: E0122 11:43:30.431098 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-s2gqx" podUID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" Jan 22 11:43:30 crc kubenswrapper[4874]: E0122 11:43:30.794457 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 11:43:30 crc kubenswrapper[4874]: E0122 11:43:30.794730 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79nw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xhb8k_openshift-marketplace(09a47d07-9bf6-4033-8c08-cc3aef9fe4f4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 11:43:30 crc kubenswrapper[4874]: E0122 11:43:30.795911 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xhb8k" podUID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" Jan 22 11:43:32 crc 
kubenswrapper[4874]: I0122 11:43:32.094958 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 11:43:32 crc kubenswrapper[4874]: E0122 11:43:32.095179 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cb8670-9db0-44c0-9c3e-b2d669f62ba5" containerName="pruner" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.095189 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cb8670-9db0-44c0-9c3e-b2d669f62ba5" containerName="pruner" Jan 22 11:43:32 crc kubenswrapper[4874]: E0122 11:43:32.095209 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a15050a-7cbd-40f6-a656-a68293c0878a" containerName="collect-profiles" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.095214 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a15050a-7cbd-40f6-a656-a68293c0878a" containerName="collect-profiles" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.095305 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cb8670-9db0-44c0-9c3e-b2d669f62ba5" containerName="pruner" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.095319 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a15050a-7cbd-40f6-a656-a68293c0878a" containerName="collect-profiles" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.095756 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.098023 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.098141 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.104295 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.267934 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57d32e28-7549-499c-a0bb-5ab789653d5e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"57d32e28-7549-499c-a0bb-5ab789653d5e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.268015 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57d32e28-7549-499c-a0bb-5ab789653d5e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"57d32e28-7549-499c-a0bb-5ab789653d5e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.368649 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57d32e28-7549-499c-a0bb-5ab789653d5e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"57d32e28-7549-499c-a0bb-5ab789653d5e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.368723 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/57d32e28-7549-499c-a0bb-5ab789653d5e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"57d32e28-7549-499c-a0bb-5ab789653d5e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.368824 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57d32e28-7549-499c-a0bb-5ab789653d5e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"57d32e28-7549-499c-a0bb-5ab789653d5e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.392438 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57d32e28-7549-499c-a0bb-5ab789653d5e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"57d32e28-7549-499c-a0bb-5ab789653d5e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 11:43:32 crc kubenswrapper[4874]: I0122 11:43:32.413655 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 11:43:33 crc kubenswrapper[4874]: I0122 11:43:33.158219 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 11:43:34 crc kubenswrapper[4874]: E0122 11:43:34.705952 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-pbbps" podUID="194d73ac-fe2b-4e80-b03b-c1b780b55990" Jan 22 11:43:34 crc kubenswrapper[4874]: E0122 11:43:34.706278 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bk5qd" podUID="829e346d-eb89-4705-83c4-99d02fca8971" Jan 22 11:43:34 crc kubenswrapper[4874]: E0122 11:43:34.706305 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-s2gqx" podUID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" Jan 22 11:43:34 crc kubenswrapper[4874]: E0122 11:43:34.706348 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xhb8k" podUID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" Jan 22 11:43:35 crc kubenswrapper[4874]: I0122 11:43:35.165312 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 11:43:35 crc kubenswrapper[4874]: W0122 11:43:35.177115 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod57d32e28_7549_499c_a0bb_5ab789653d5e.slice/crio-75b613d51d19f18b727cdaed48bc87cc2bc83f0df15b5099a878a65b74a0dd05 WatchSource:0}: Error finding container 75b613d51d19f18b727cdaed48bc87cc2bc83f0df15b5099a878a65b74a0dd05: Status 404 returned error can't find the container with id 75b613d51d19f18b727cdaed48bc87cc2bc83f0df15b5099a878a65b74a0dd05 Jan 22 11:43:35 crc kubenswrapper[4874]: I0122 11:43:35.389849 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"57d32e28-7549-499c-a0bb-5ab789653d5e","Type":"ContainerStarted","Data":"75b613d51d19f18b727cdaed48bc87cc2bc83f0df15b5099a878a65b74a0dd05"} Jan 22 11:43:35 crc kubenswrapper[4874]: I0122 11:43:35.391210 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" event={"ID":"5451fbab-ebad-42e7-bb80-f94bad10d571","Type":"ContainerStarted","Data":"40b8458e9937ed678c65a078d747297aea6e373c424f3a601789df0bff8b1532"} Jan 22 11:43:35 crc kubenswrapper[4874]: E0122 11:43:35.613773 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 11:43:35 crc kubenswrapper[4874]: E0122 11:43:35.614341 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djwcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k7sj5_openshift-marketplace(7d6b2fd5-d040-4be5-afb1-a375bddd3e88): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 11:43:35 crc kubenswrapper[4874]: E0122 11:43:35.615620 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-k7sj5" podUID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" Jan 22 11:43:35 crc 
kubenswrapper[4874]: E0122 11:43:35.670475 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 11:43:35 crc kubenswrapper[4874]: E0122 11:43:35.670885 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v87lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-chgtv_openshift-marketplace(25f55928-a244-44f3-83a5-b6bdf551bda6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 11:43:35 crc kubenswrapper[4874]: E0122 11:43:35.672097 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-chgtv" podUID="25f55928-a244-44f3-83a5-b6bdf551bda6" Jan 22 11:43:35 crc kubenswrapper[4874]: E0122 11:43:35.735277 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 11:43:35 crc kubenswrapper[4874]: E0122 11:43:35.735858 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wzmfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cxrx9_openshift-marketplace(ede22568-7d51-42d6-b96d-1783c4e5b370): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 11:43:35 crc kubenswrapper[4874]: E0122 11:43:35.737597 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cxrx9" podUID="ede22568-7d51-42d6-b96d-1783c4e5b370" Jan 22 11:43:35 crc 
kubenswrapper[4874]: E0122 11:43:35.802567 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 11:43:35 crc kubenswrapper[4874]: E0122 11:43:35.802728 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2xvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-r2mwt_openshift-marketplace(dea4a6eb-c0b1-432a-81f2-e417250b0138): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 11:43:35 crc kubenswrapper[4874]: E0122 11:43:35.803971 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-r2mwt" podUID="dea4a6eb-c0b1-432a-81f2-e417250b0138" Jan 22 11:43:36 crc kubenswrapper[4874]: I0122 11:43:36.402788 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" event={"ID":"5451fbab-ebad-42e7-bb80-f94bad10d571","Type":"ContainerStarted","Data":"2a9c9d16db89a1737e501b6a2b8030392d73684bf821b2800db34795c8e53a93"} Jan 22 11:43:36 crc kubenswrapper[4874]: I0122 11:43:36.402861 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lr2vd" event={"ID":"5451fbab-ebad-42e7-bb80-f94bad10d571","Type":"ContainerStarted","Data":"bb0f7cb0546d70aa3da2091415a077df2f3e60d50033607434bef714bfd4c7a1"} Jan 22 11:43:36 crc kubenswrapper[4874]: I0122 11:43:36.407917 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"57d32e28-7549-499c-a0bb-5ab789653d5e","Type":"ContainerStarted","Data":"b0ec8f67977f1acc68b563b2d55757cc5fbf02cd92b79e7cffde786c8ba8fef4"} Jan 22 11:43:36 crc kubenswrapper[4874]: E0122 11:43:36.410507 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cxrx9" 
podUID="ede22568-7d51-42d6-b96d-1783c4e5b370" Jan 22 11:43:36 crc kubenswrapper[4874]: E0122 11:43:36.410873 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-chgtv" podUID="25f55928-a244-44f3-83a5-b6bdf551bda6" Jan 22 11:43:36 crc kubenswrapper[4874]: E0122 11:43:36.411968 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-k7sj5" podUID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" Jan 22 11:43:36 crc kubenswrapper[4874]: E0122 11:43:36.412968 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-r2mwt" podUID="dea4a6eb-c0b1-432a-81f2-e417250b0138" Jan 22 11:43:36 crc kubenswrapper[4874]: I0122 11:43:36.420732 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lr2vd" podStartSLOduration=172.420713658 podStartE2EDuration="2m52.420713658s" podCreationTimestamp="2026-01-22 11:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:43:36.415830565 +0000 UTC m=+190.260901655" watchObservedRunningTime="2026-01-22 11:43:36.420713658 +0000 UTC m=+190.265784728" Jan 22 11:43:36 crc kubenswrapper[4874]: I0122 11:43:36.892960 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 11:43:36 crc 
kubenswrapper[4874]: I0122 11:43:36.894074 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 11:43:36 crc kubenswrapper[4874]: I0122 11:43:36.916721 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.073869 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1c5453e-ced5-4d10-b696-df2a76b6a783-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b1c5453e-ced5-4d10-b696-df2a76b6a783\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.073923 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1c5453e-ced5-4d10-b696-df2a76b6a783-kube-api-access\") pod \"installer-9-crc\" (UID: \"b1c5453e-ced5-4d10-b696-df2a76b6a783\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.073953 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1c5453e-ced5-4d10-b696-df2a76b6a783-var-lock\") pod \"installer-9-crc\" (UID: \"b1c5453e-ced5-4d10-b696-df2a76b6a783\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.175474 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1c5453e-ced5-4d10-b696-df2a76b6a783-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b1c5453e-ced5-4d10-b696-df2a76b6a783\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.175537 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1c5453e-ced5-4d10-b696-df2a76b6a783-kube-api-access\") pod \"installer-9-crc\" (UID: \"b1c5453e-ced5-4d10-b696-df2a76b6a783\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.175562 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1c5453e-ced5-4d10-b696-df2a76b6a783-var-lock\") pod \"installer-9-crc\" (UID: \"b1c5453e-ced5-4d10-b696-df2a76b6a783\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.175594 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1c5453e-ced5-4d10-b696-df2a76b6a783-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b1c5453e-ced5-4d10-b696-df2a76b6a783\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.175645 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1c5453e-ced5-4d10-b696-df2a76b6a783-var-lock\") pod \"installer-9-crc\" (UID: \"b1c5453e-ced5-4d10-b696-df2a76b6a783\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.197574 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1c5453e-ced5-4d10-b696-df2a76b6a783-kube-api-access\") pod \"installer-9-crc\" (UID: \"b1c5453e-ced5-4d10-b696-df2a76b6a783\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.216599 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.413472 4874 generic.go:334] "Generic (PLEG): container finished" podID="57d32e28-7549-499c-a0bb-5ab789653d5e" containerID="b0ec8f67977f1acc68b563b2d55757cc5fbf02cd92b79e7cffde786c8ba8fef4" exitCode=0 Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.413596 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"57d32e28-7549-499c-a0bb-5ab789653d5e","Type":"ContainerDied","Data":"b0ec8f67977f1acc68b563b2d55757cc5fbf02cd92b79e7cffde786c8ba8fef4"} Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.622490 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.687341 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.881911 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57d32e28-7549-499c-a0bb-5ab789653d5e-kube-api-access\") pod \"57d32e28-7549-499c-a0bb-5ab789653d5e\" (UID: \"57d32e28-7549-499c-a0bb-5ab789653d5e\") " Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.881984 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57d32e28-7549-499c-a0bb-5ab789653d5e-kubelet-dir\") pod \"57d32e28-7549-499c-a0bb-5ab789653d5e\" (UID: \"57d32e28-7549-499c-a0bb-5ab789653d5e\") " Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.882283 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57d32e28-7549-499c-a0bb-5ab789653d5e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"57d32e28-7549-499c-a0bb-5ab789653d5e" (UID: "57d32e28-7549-499c-a0bb-5ab789653d5e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.892150 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57d32e28-7549-499c-a0bb-5ab789653d5e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "57d32e28-7549-499c-a0bb-5ab789653d5e" (UID: "57d32e28-7549-499c-a0bb-5ab789653d5e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.983218 4874 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57d32e28-7549-499c-a0bb-5ab789653d5e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:43:37 crc kubenswrapper[4874]: I0122 11:43:37.983259 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57d32e28-7549-499c-a0bb-5ab789653d5e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:43:38 crc kubenswrapper[4874]: I0122 11:43:38.419880 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"57d32e28-7549-499c-a0bb-5ab789653d5e","Type":"ContainerDied","Data":"75b613d51d19f18b727cdaed48bc87cc2bc83f0df15b5099a878a65b74a0dd05"} Jan 22 11:43:38 crc kubenswrapper[4874]: I0122 11:43:38.420216 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75b613d51d19f18b727cdaed48bc87cc2bc83f0df15b5099a878a65b74a0dd05" Jan 22 11:43:38 crc kubenswrapper[4874]: I0122 11:43:38.419896 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 11:43:38 crc kubenswrapper[4874]: I0122 11:43:38.420946 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b1c5453e-ced5-4d10-b696-df2a76b6a783","Type":"ContainerStarted","Data":"42a935729ff60b97150afb87bbec2f6ec94b10ae9d811cc58f762b075b789f67"} Jan 22 11:43:38 crc kubenswrapper[4874]: I0122 11:43:38.420971 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b1c5453e-ced5-4d10-b696-df2a76b6a783","Type":"ContainerStarted","Data":"ba05ca2e002e9c3237d941c91912a43cc5d0958363ae275896c78814a9a08db4"} Jan 22 11:43:38 crc kubenswrapper[4874]: I0122 11:43:38.436463 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.436376612 podStartE2EDuration="2.436376612s" podCreationTimestamp="2026-01-22 11:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:43:38.432884462 +0000 UTC m=+192.277955532" watchObservedRunningTime="2026-01-22 11:43:38.436376612 +0000 UTC m=+192.281447682" Jan 22 11:43:38 crc kubenswrapper[4874]: E0122 11:43:38.501202 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod57d32e28_7549_499c_a0bb_5ab789653d5e.slice\": RecentStats: unable to find data in memory cache]" Jan 22 11:43:43 crc kubenswrapper[4874]: I0122 11:43:43.073789 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8j9tz"] Jan 22 11:43:43 crc kubenswrapper[4874]: I0122 11:43:43.520771 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:43:43 crc kubenswrapper[4874]: I0122 11:43:43.521212 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:43:48 crc kubenswrapper[4874]: I0122 11:43:48.473837 4874 generic.go:334] "Generic (PLEG): container finished" podID="194d73ac-fe2b-4e80-b03b-c1b780b55990" containerID="74aa9e987c447e71fa4ef0ece296f3c4949e2a2a7f45a155b2c483ac48eabbab" exitCode=0 Jan 22 11:43:48 crc kubenswrapper[4874]: I0122 11:43:48.473915 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbbps" event={"ID":"194d73ac-fe2b-4e80-b03b-c1b780b55990","Type":"ContainerDied","Data":"74aa9e987c447e71fa4ef0ece296f3c4949e2a2a7f45a155b2c483ac48eabbab"} Jan 22 11:43:48 crc kubenswrapper[4874]: I0122 11:43:48.477151 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk5qd" event={"ID":"829e346d-eb89-4705-83c4-99d02fca8971","Type":"ContainerStarted","Data":"231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b"} Jan 22 11:43:48 crc kubenswrapper[4874]: I0122 11:43:48.480447 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2gqx" event={"ID":"2d07947b-508d-4f12-ba1b-2d5f24a6db2c","Type":"ContainerStarted","Data":"b877e60b7da09c280a2886dd4dcd5b096a9cfc5cccc418b32f7e1f7a67f9a9ec"} Jan 22 11:43:48 crc kubenswrapper[4874]: E0122 11:43:48.629953 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod829e346d_eb89_4705_83c4_99d02fca8971.slice/crio-231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod829e346d_eb89_4705_83c4_99d02fca8971.slice/crio-conmon-231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:43:49 crc kubenswrapper[4874]: I0122 11:43:49.486898 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbbps" event={"ID":"194d73ac-fe2b-4e80-b03b-c1b780b55990","Type":"ContainerStarted","Data":"0648ed6c2fadca30717bac213e382fb0e0ccb45470ec390feeb8786d37e89637"} Jan 22 11:43:49 crc kubenswrapper[4874]: I0122 11:43:49.489545 4874 generic.go:334] "Generic (PLEG): container finished" podID="829e346d-eb89-4705-83c4-99d02fca8971" containerID="231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b" exitCode=0 Jan 22 11:43:49 crc kubenswrapper[4874]: I0122 11:43:49.489596 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk5qd" event={"ID":"829e346d-eb89-4705-83c4-99d02fca8971","Type":"ContainerDied","Data":"231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b"} Jan 22 11:43:49 crc kubenswrapper[4874]: I0122 11:43:49.492973 4874 generic.go:334] "Generic (PLEG): container finished" podID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" containerID="b877e60b7da09c280a2886dd4dcd5b096a9cfc5cccc418b32f7e1f7a67f9a9ec" exitCode=0 Jan 22 11:43:49 crc kubenswrapper[4874]: I0122 11:43:49.493044 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2gqx" event={"ID":"2d07947b-508d-4f12-ba1b-2d5f24a6db2c","Type":"ContainerDied","Data":"b877e60b7da09c280a2886dd4dcd5b096a9cfc5cccc418b32f7e1f7a67f9a9ec"} Jan 22 11:43:49 crc kubenswrapper[4874]: 
I0122 11:43:49.501023 4874 generic.go:334] "Generic (PLEG): container finished" podID="dea4a6eb-c0b1-432a-81f2-e417250b0138" containerID="3afe6d67a395d98d3557fc317e71f0c8a6bb5a855419905a6dab04529d3efc0a" exitCode=0 Jan 22 11:43:49 crc kubenswrapper[4874]: I0122 11:43:49.501113 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2mwt" event={"ID":"dea4a6eb-c0b1-432a-81f2-e417250b0138","Type":"ContainerDied","Data":"3afe6d67a395d98d3557fc317e71f0c8a6bb5a855419905a6dab04529d3efc0a"} Jan 22 11:43:49 crc kubenswrapper[4874]: I0122 11:43:49.514343 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pbbps" podStartSLOduration=2.544358945 podStartE2EDuration="57.514323802s" podCreationTimestamp="2026-01-22 11:42:52 +0000 UTC" firstStartedPulling="2026-01-22 11:42:53.909970033 +0000 UTC m=+147.755041103" lastFinishedPulling="2026-01-22 11:43:48.87993487 +0000 UTC m=+202.725005960" observedRunningTime="2026-01-22 11:43:49.506176826 +0000 UTC m=+203.351247916" watchObservedRunningTime="2026-01-22 11:43:49.514323802 +0000 UTC m=+203.359394872" Jan 22 11:43:49 crc kubenswrapper[4874]: I0122 11:43:49.514982 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhb8k" event={"ID":"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4","Type":"ContainerStarted","Data":"0271a88b4f392b5c007a29012fdaee2d611a90d52f146f49b1b948112a16ea7f"} Jan 22 11:43:50 crc kubenswrapper[4874]: I0122 11:43:50.522330 4874 generic.go:334] "Generic (PLEG): container finished" podID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" containerID="0271a88b4f392b5c007a29012fdaee2d611a90d52f146f49b1b948112a16ea7f" exitCode=0 Jan 22 11:43:50 crc kubenswrapper[4874]: I0122 11:43:50.522428 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhb8k" 
event={"ID":"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4","Type":"ContainerDied","Data":"0271a88b4f392b5c007a29012fdaee2d611a90d52f146f49b1b948112a16ea7f"} Jan 22 11:43:50 crc kubenswrapper[4874]: I0122 11:43:50.524851 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxrx9" event={"ID":"ede22568-7d51-42d6-b96d-1783c4e5b370","Type":"ContainerStarted","Data":"6061b65fbd29bdf70fd1e090a65e25c9ca80adc4b7d5e07809dd238cc90776ba"} Jan 22 11:43:50 crc kubenswrapper[4874]: I0122 11:43:50.527334 4874 generic.go:334] "Generic (PLEG): container finished" podID="25f55928-a244-44f3-83a5-b6bdf551bda6" containerID="070c1bb6dbcaa01d78ef737b95b41d8ef036a0a7e51764ac67798ef0b37e2ccf" exitCode=0 Jan 22 11:43:50 crc kubenswrapper[4874]: I0122 11:43:50.527421 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chgtv" event={"ID":"25f55928-a244-44f3-83a5-b6bdf551bda6","Type":"ContainerDied","Data":"070c1bb6dbcaa01d78ef737b95b41d8ef036a0a7e51764ac67798ef0b37e2ccf"} Jan 22 11:43:50 crc kubenswrapper[4874]: I0122 11:43:50.529194 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk5qd" event={"ID":"829e346d-eb89-4705-83c4-99d02fca8971","Type":"ContainerStarted","Data":"051c0e6931085c66795c18888f3c00c7f7391d31df635135b32e59153b20723e"} Jan 22 11:43:50 crc kubenswrapper[4874]: I0122 11:43:50.532354 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2mwt" event={"ID":"dea4a6eb-c0b1-432a-81f2-e417250b0138","Type":"ContainerStarted","Data":"1f66114e745014e9b2811f8ce4a3519c0db97b1d4c5d32326d52ab45a752ab32"} Jan 22 11:43:50 crc kubenswrapper[4874]: I0122 11:43:50.534297 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2gqx" 
event={"ID":"2d07947b-508d-4f12-ba1b-2d5f24a6db2c","Type":"ContainerStarted","Data":"d86e34d52dc8fcbe07403b1086c0ea0e849c1283e037e58504051c2dba7b2efd"} Jan 22 11:43:50 crc kubenswrapper[4874]: I0122 11:43:50.588293 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r2mwt" podStartSLOduration=3.334810375 podStartE2EDuration="1m1.58827092s" podCreationTimestamp="2026-01-22 11:42:49 +0000 UTC" firstStartedPulling="2026-01-22 11:42:51.742468322 +0000 UTC m=+145.587539392" lastFinishedPulling="2026-01-22 11:43:49.995928867 +0000 UTC m=+203.840999937" observedRunningTime="2026-01-22 11:43:50.584732088 +0000 UTC m=+204.429803168" watchObservedRunningTime="2026-01-22 11:43:50.58827092 +0000 UTC m=+204.433341990" Jan 22 11:43:50 crc kubenswrapper[4874]: I0122 11:43:50.605113 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bk5qd" podStartSLOduration=3.444029675 podStartE2EDuration="1m1.605093018s" podCreationTimestamp="2026-01-22 11:42:49 +0000 UTC" firstStartedPulling="2026-01-22 11:42:51.740296024 +0000 UTC m=+145.585367094" lastFinishedPulling="2026-01-22 11:43:49.901359367 +0000 UTC m=+203.746430437" observedRunningTime="2026-01-22 11:43:50.600619067 +0000 UTC m=+204.445690137" watchObservedRunningTime="2026-01-22 11:43:50.605093018 +0000 UTC m=+204.450164088" Jan 22 11:43:50 crc kubenswrapper[4874]: I0122 11:43:50.625652 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s2gqx" podStartSLOduration=3.7975601770000003 podStartE2EDuration="58.625637663s" podCreationTimestamp="2026-01-22 11:42:52 +0000 UTC" firstStartedPulling="2026-01-22 11:42:55.065568309 +0000 UTC m=+148.910639379" lastFinishedPulling="2026-01-22 11:43:49.893645785 +0000 UTC m=+203.738716865" observedRunningTime="2026-01-22 11:43:50.62493091 +0000 UTC m=+204.470001990" 
watchObservedRunningTime="2026-01-22 11:43:50.625637663 +0000 UTC m=+204.470708733" Jan 22 11:43:51 crc kubenswrapper[4874]: I0122 11:43:51.547580 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhb8k" event={"ID":"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4","Type":"ContainerStarted","Data":"c92c089eac1a4f555258a56bb484b8c71a6cc82dce9149ac96cfba7e4b8f946a"} Jan 22 11:43:51 crc kubenswrapper[4874]: I0122 11:43:51.550115 4874 generic.go:334] "Generic (PLEG): container finished" podID="ede22568-7d51-42d6-b96d-1783c4e5b370" containerID="6061b65fbd29bdf70fd1e090a65e25c9ca80adc4b7d5e07809dd238cc90776ba" exitCode=0 Jan 22 11:43:51 crc kubenswrapper[4874]: I0122 11:43:51.550189 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxrx9" event={"ID":"ede22568-7d51-42d6-b96d-1783c4e5b370","Type":"ContainerDied","Data":"6061b65fbd29bdf70fd1e090a65e25c9ca80adc4b7d5e07809dd238cc90776ba"} Jan 22 11:43:51 crc kubenswrapper[4874]: I0122 11:43:51.553764 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chgtv" event={"ID":"25f55928-a244-44f3-83a5-b6bdf551bda6","Type":"ContainerStarted","Data":"4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7"} Jan 22 11:43:51 crc kubenswrapper[4874]: I0122 11:43:51.555250 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7sj5" event={"ID":"7d6b2fd5-d040-4be5-afb1-a375bddd3e88","Type":"ContainerStarted","Data":"55598fd3ad7b384be613bb53ff1f155b2db0dc67c238c62da8732c037ab04867"} Jan 22 11:43:51 crc kubenswrapper[4874]: I0122 11:43:51.569780 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xhb8k" podStartSLOduration=2.600058477 podStartE2EDuration="1m0.569766373s" podCreationTimestamp="2026-01-22 11:42:51 +0000 UTC" firstStartedPulling="2026-01-22 11:42:52.994598712 +0000 
UTC m=+146.839669782" lastFinishedPulling="2026-01-22 11:43:50.964306618 +0000 UTC m=+204.809377678" observedRunningTime="2026-01-22 11:43:51.569485974 +0000 UTC m=+205.414557074" watchObservedRunningTime="2026-01-22 11:43:51.569766373 +0000 UTC m=+205.414837443" Jan 22 11:43:51 crc kubenswrapper[4874]: I0122 11:43:51.655698 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-chgtv" podStartSLOduration=2.792459793 podStartE2EDuration="58.655676801s" podCreationTimestamp="2026-01-22 11:42:53 +0000 UTC" firstStartedPulling="2026-01-22 11:42:55.041221321 +0000 UTC m=+148.886292391" lastFinishedPulling="2026-01-22 11:43:50.904438329 +0000 UTC m=+204.749509399" observedRunningTime="2026-01-22 11:43:51.615786449 +0000 UTC m=+205.460857529" watchObservedRunningTime="2026-01-22 11:43:51.655676801 +0000 UTC m=+205.500747871" Jan 22 11:43:52 crc kubenswrapper[4874]: I0122 11:43:52.123481 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:43:52 crc kubenswrapper[4874]: I0122 11:43:52.123576 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:43:52 crc kubenswrapper[4874]: I0122 11:43:52.539933 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:43:52 crc kubenswrapper[4874]: I0122 11:43:52.539973 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:43:52 crc kubenswrapper[4874]: I0122 11:43:52.562511 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxrx9" event={"ID":"ede22568-7d51-42d6-b96d-1783c4e5b370","Type":"ContainerStarted","Data":"88517753dde54e6943c6fe4f548a126863bb3d622e70a38576e28d6b20f09b8b"} Jan 22 11:43:52 crc 
kubenswrapper[4874]: I0122 11:43:52.564271 4874 generic.go:334] "Generic (PLEG): container finished" podID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" containerID="55598fd3ad7b384be613bb53ff1f155b2db0dc67c238c62da8732c037ab04867" exitCode=0 Jan 22 11:43:52 crc kubenswrapper[4874]: I0122 11:43:52.564360 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7sj5" event={"ID":"7d6b2fd5-d040-4be5-afb1-a375bddd3e88","Type":"ContainerDied","Data":"55598fd3ad7b384be613bb53ff1f155b2db0dc67c238c62da8732c037ab04867"} Jan 22 11:43:52 crc kubenswrapper[4874]: I0122 11:43:52.585148 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cxrx9" podStartSLOduration=2.36684956 podStartE2EDuration="1m2.58512973s" podCreationTimestamp="2026-01-22 11:42:50 +0000 UTC" firstStartedPulling="2026-01-22 11:42:51.738099595 +0000 UTC m=+145.583170665" lastFinishedPulling="2026-01-22 11:43:51.956379765 +0000 UTC m=+205.801450835" observedRunningTime="2026-01-22 11:43:52.583615253 +0000 UTC m=+206.428686313" watchObservedRunningTime="2026-01-22 11:43:52.58512973 +0000 UTC m=+206.430200800" Jan 22 11:43:52 crc kubenswrapper[4874]: I0122 11:43:52.608787 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:43:53 crc kubenswrapper[4874]: I0122 11:43:53.317081 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-xhb8k" podUID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" containerName="registry-server" probeResult="failure" output=< Jan 22 11:43:53 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Jan 22 11:43:53 crc kubenswrapper[4874]: > Jan 22 11:43:53 crc kubenswrapper[4874]: I0122 11:43:53.348169 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 
11:43:53 crc kubenswrapper[4874]: I0122 11:43:53.348211 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:43:53 crc kubenswrapper[4874]: I0122 11:43:53.577270 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7sj5" event={"ID":"7d6b2fd5-d040-4be5-afb1-a375bddd3e88","Type":"ContainerStarted","Data":"1cd3bbf2af5a2b7f7b7a4ab5adaef76db5460bdb370548bcbe2419923c557fc1"} Jan 22 11:43:53 crc kubenswrapper[4874]: I0122 11:43:53.601667 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k7sj5" podStartSLOduration=2.406929888 podStartE2EDuration="1m3.601651634s" podCreationTimestamp="2026-01-22 11:42:50 +0000 UTC" firstStartedPulling="2026-01-22 11:42:51.736017998 +0000 UTC m=+145.581089068" lastFinishedPulling="2026-01-22 11:43:52.930739744 +0000 UTC m=+206.775810814" observedRunningTime="2026-01-22 11:43:53.599068483 +0000 UTC m=+207.444139573" watchObservedRunningTime="2026-01-22 11:43:53.601651634 +0000 UTC m=+207.446722704" Jan 22 11:43:53 crc kubenswrapper[4874]: I0122 11:43:53.827369 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:43:53 crc kubenswrapper[4874]: I0122 11:43:53.827431 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:43:54 crc kubenswrapper[4874]: I0122 11:43:54.382185 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s2gqx" podUID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" containerName="registry-server" probeResult="failure" output=< Jan 22 11:43:54 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Jan 22 11:43:54 crc kubenswrapper[4874]: > Jan 22 11:43:54 crc kubenswrapper[4874]: I0122 11:43:54.881423 4874 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chgtv" podUID="25f55928-a244-44f3-83a5-b6bdf551bda6" containerName="registry-server" probeResult="failure" output=< Jan 22 11:43:54 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Jan 22 11:43:54 crc kubenswrapper[4874]: > Jan 22 11:44:00 crc kubenswrapper[4874]: I0122 11:44:00.155620 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:44:00 crc kubenswrapper[4874]: I0122 11:44:00.155951 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:44:00 crc kubenswrapper[4874]: I0122 11:44:00.214638 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:44:00 crc kubenswrapper[4874]: I0122 11:44:00.318817 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:44:00 crc kubenswrapper[4874]: I0122 11:44:00.318922 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:44:00 crc kubenswrapper[4874]: I0122 11:44:00.388381 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:44:00 crc kubenswrapper[4874]: I0122 11:44:00.532199 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:44:00 crc kubenswrapper[4874]: I0122 11:44:00.532531 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:44:00 crc kubenswrapper[4874]: I0122 11:44:00.681471 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:44:00 crc kubenswrapper[4874]: I0122 11:44:00.746057 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:44:00 crc kubenswrapper[4874]: I0122 11:44:00.746137 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:44:00 crc kubenswrapper[4874]: I0122 11:44:00.789870 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:44:01 crc kubenswrapper[4874]: I0122 11:44:01.145122 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:44:01 crc kubenswrapper[4874]: I0122 11:44:01.145287 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:44:01 crc kubenswrapper[4874]: I0122 11:44:01.668004 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k7sj5" Jan 22 11:44:01 crc kubenswrapper[4874]: I0122 11:44:01.678178 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cxrx9" Jan 22 11:44:02 crc kubenswrapper[4874]: I0122 11:44:02.171888 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:44:02 crc kubenswrapper[4874]: I0122 11:44:02.242778 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:44:02 crc kubenswrapper[4874]: I0122 11:44:02.602538 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pbbps" Jan 22 11:44:02 crc 
kubenswrapper[4874]: I0122 11:44:02.949384 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k7sj5"] Jan 22 11:44:03 crc kubenswrapper[4874]: I0122 11:44:03.147822 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cxrx9"] Jan 22 11:44:03 crc kubenswrapper[4874]: I0122 11:44:03.416687 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:44:03 crc kubenswrapper[4874]: I0122 11:44:03.459982 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:44:03 crc kubenswrapper[4874]: I0122 11:44:03.642625 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cxrx9" podUID="ede22568-7d51-42d6-b96d-1783c4e5b370" containerName="registry-server" containerID="cri-o://88517753dde54e6943c6fe4f548a126863bb3d622e70a38576e28d6b20f09b8b" gracePeriod=2 Jan 22 11:44:03 crc kubenswrapper[4874]: I0122 11:44:03.642695 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k7sj5" podUID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" containerName="registry-server" containerID="cri-o://1cd3bbf2af5a2b7f7b7a4ab5adaef76db5460bdb370548bcbe2419923c557fc1" gracePeriod=2 Jan 22 11:44:03 crc kubenswrapper[4874]: I0122 11:44:03.890700 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:44:03 crc kubenswrapper[4874]: I0122 11:44:03.951587 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-chgtv" Jan 22 11:44:05 crc kubenswrapper[4874]: I0122 11:44:05.352918 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbbps"] Jan 22 
11:44:05 crc kubenswrapper[4874]: I0122 11:44:05.353574 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pbbps" podUID="194d73ac-fe2b-4e80-b03b-c1b780b55990" containerName="registry-server" containerID="cri-o://0648ed6c2fadca30717bac213e382fb0e0ccb45470ec390feeb8786d37e89637" gracePeriod=2
Jan 22 11:44:05 crc kubenswrapper[4874]: I0122 11:44:05.655076 4874 generic.go:334] "Generic (PLEG): container finished" podID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" containerID="1cd3bbf2af5a2b7f7b7a4ab5adaef76db5460bdb370548bcbe2419923c557fc1" exitCode=0
Jan 22 11:44:05 crc kubenswrapper[4874]: I0122 11:44:05.655157 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7sj5" event={"ID":"7d6b2fd5-d040-4be5-afb1-a375bddd3e88","Type":"ContainerDied","Data":"1cd3bbf2af5a2b7f7b7a4ab5adaef76db5460bdb370548bcbe2419923c557fc1"}
Jan 22 11:44:06 crc kubenswrapper[4874]: I0122 11:44:06.665318 4874 generic.go:334] "Generic (PLEG): container finished" podID="ede22568-7d51-42d6-b96d-1783c4e5b370" containerID="88517753dde54e6943c6fe4f548a126863bb3d622e70a38576e28d6b20f09b8b" exitCode=0
Jan 22 11:44:06 crc kubenswrapper[4874]: I0122 11:44:06.665650 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxrx9" event={"ID":"ede22568-7d51-42d6-b96d-1783c4e5b370","Type":"ContainerDied","Data":"88517753dde54e6943c6fe4f548a126863bb3d622e70a38576e28d6b20f09b8b"}
Jan 22 11:44:06 crc kubenswrapper[4874]: I0122 11:44:06.667570 4874 generic.go:334] "Generic (PLEG): container finished" podID="194d73ac-fe2b-4e80-b03b-c1b780b55990" containerID="0648ed6c2fadca30717bac213e382fb0e0ccb45470ec390feeb8786d37e89637" exitCode=0
Jan 22 11:44:06 crc kubenswrapper[4874]: I0122 11:44:06.667591 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbbps" event={"ID":"194d73ac-fe2b-4e80-b03b-c1b780b55990","Type":"ContainerDied","Data":"0648ed6c2fadca30717bac213e382fb0e0ccb45470ec390feeb8786d37e89637"}
Jan 22 11:44:06 crc kubenswrapper[4874]: I0122 11:44:06.877099 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7sj5"
Jan 22 11:44:06 crc kubenswrapper[4874]: I0122 11:44:06.881506 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxrx9"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:06.999916 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-catalog-content\") pod \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\" (UID: \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\") "
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:06.999977 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede22568-7d51-42d6-b96d-1783c4e5b370-utilities\") pod \"ede22568-7d51-42d6-b96d-1783c4e5b370\" (UID: \"ede22568-7d51-42d6-b96d-1783c4e5b370\") "
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:06.999996 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede22568-7d51-42d6-b96d-1783c4e5b370-catalog-content\") pod \"ede22568-7d51-42d6-b96d-1783c4e5b370\" (UID: \"ede22568-7d51-42d6-b96d-1783c4e5b370\") "
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.000050 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djwcc\" (UniqueName: \"kubernetes.io/projected/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-kube-api-access-djwcc\") pod \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\" (UID: \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\") "
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.000127 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-utilities\") pod \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\" (UID: \"7d6b2fd5-d040-4be5-afb1-a375bddd3e88\") "
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.000152 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzmfp\" (UniqueName: \"kubernetes.io/projected/ede22568-7d51-42d6-b96d-1783c4e5b370-kube-api-access-wzmfp\") pod \"ede22568-7d51-42d6-b96d-1783c4e5b370\" (UID: \"ede22568-7d51-42d6-b96d-1783c4e5b370\") "
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.000877 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ede22568-7d51-42d6-b96d-1783c4e5b370-utilities" (OuterVolumeSpecName: "utilities") pod "ede22568-7d51-42d6-b96d-1783c4e5b370" (UID: "ede22568-7d51-42d6-b96d-1783c4e5b370"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.001181 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-utilities" (OuterVolumeSpecName: "utilities") pod "7d6b2fd5-d040-4be5-afb1-a375bddd3e88" (UID: "7d6b2fd5-d040-4be5-afb1-a375bddd3e88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.012790 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-kube-api-access-djwcc" (OuterVolumeSpecName: "kube-api-access-djwcc") pod "7d6b2fd5-d040-4be5-afb1-a375bddd3e88" (UID: "7d6b2fd5-d040-4be5-afb1-a375bddd3e88"). InnerVolumeSpecName "kube-api-access-djwcc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.012958 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede22568-7d51-42d6-b96d-1783c4e5b370-kube-api-access-wzmfp" (OuterVolumeSpecName: "kube-api-access-wzmfp") pod "ede22568-7d51-42d6-b96d-1783c4e5b370" (UID: "ede22568-7d51-42d6-b96d-1783c4e5b370"). InnerVolumeSpecName "kube-api-access-wzmfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.045951 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ede22568-7d51-42d6-b96d-1783c4e5b370-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ede22568-7d51-42d6-b96d-1783c4e5b370" (UID: "ede22568-7d51-42d6-b96d-1783c4e5b370"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.051183 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d6b2fd5-d040-4be5-afb1-a375bddd3e88" (UID: "7d6b2fd5-d040-4be5-afb1-a375bddd3e88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.071980 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pbbps"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.101814 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede22568-7d51-42d6-b96d-1783c4e5b370-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.101842 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede22568-7d51-42d6-b96d-1783c4e5b370-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.101853 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djwcc\" (UniqueName: \"kubernetes.io/projected/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-kube-api-access-djwcc\") on node \"crc\" DevicePath \"\""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.101861 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.101872 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzmfp\" (UniqueName: \"kubernetes.io/projected/ede22568-7d51-42d6-b96d-1783c4e5b370-kube-api-access-wzmfp\") on node \"crc\" DevicePath \"\""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.101880 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6b2fd5-d040-4be5-afb1-a375bddd3e88-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.202328 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194d73ac-fe2b-4e80-b03b-c1b780b55990-catalog-content\") pod \"194d73ac-fe2b-4e80-b03b-c1b780b55990\" (UID: \"194d73ac-fe2b-4e80-b03b-c1b780b55990\") "
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.202386 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194d73ac-fe2b-4e80-b03b-c1b780b55990-utilities\") pod \"194d73ac-fe2b-4e80-b03b-c1b780b55990\" (UID: \"194d73ac-fe2b-4e80-b03b-c1b780b55990\") "
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.202490 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrxzq\" (UniqueName: \"kubernetes.io/projected/194d73ac-fe2b-4e80-b03b-c1b780b55990-kube-api-access-jrxzq\") pod \"194d73ac-fe2b-4e80-b03b-c1b780b55990\" (UID: \"194d73ac-fe2b-4e80-b03b-c1b780b55990\") "
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.204750 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/194d73ac-fe2b-4e80-b03b-c1b780b55990-utilities" (OuterVolumeSpecName: "utilities") pod "194d73ac-fe2b-4e80-b03b-c1b780b55990" (UID: "194d73ac-fe2b-4e80-b03b-c1b780b55990"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.206328 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/194d73ac-fe2b-4e80-b03b-c1b780b55990-kube-api-access-jrxzq" (OuterVolumeSpecName: "kube-api-access-jrxzq") pod "194d73ac-fe2b-4e80-b03b-c1b780b55990" (UID: "194d73ac-fe2b-4e80-b03b-c1b780b55990"). InnerVolumeSpecName "kube-api-access-jrxzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.227654 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/194d73ac-fe2b-4e80-b03b-c1b780b55990-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "194d73ac-fe2b-4e80-b03b-c1b780b55990" (UID: "194d73ac-fe2b-4e80-b03b-c1b780b55990"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.304312 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194d73ac-fe2b-4e80-b03b-c1b780b55990-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.304354 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrxzq\" (UniqueName: \"kubernetes.io/projected/194d73ac-fe2b-4e80-b03b-c1b780b55990-kube-api-access-jrxzq\") on node \"crc\" DevicePath \"\""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.304421 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194d73ac-fe2b-4e80-b03b-c1b780b55990-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.679450 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxrx9"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.679505 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxrx9" event={"ID":"ede22568-7d51-42d6-b96d-1783c4e5b370","Type":"ContainerDied","Data":"eb9fe6572677e18214bcf1bd4f031854d94ab0f4aacf889a0900f36d0c7491fc"}
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.679603 4874 scope.go:117] "RemoveContainer" containerID="88517753dde54e6943c6fe4f548a126863bb3d622e70a38576e28d6b20f09b8b"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.685089 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbbps" event={"ID":"194d73ac-fe2b-4e80-b03b-c1b780b55990","Type":"ContainerDied","Data":"3b5a39abee6782e33316f7969db2ab23c5d711ceba05225f55e2ae3fb886847c"}
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.685235 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pbbps"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.688910 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k7sj5" event={"ID":"7d6b2fd5-d040-4be5-afb1-a375bddd3e88","Type":"ContainerDied","Data":"d963c7b2a3b3c57b49606220d41e2548892102c0a65c18048da0d0bded52f8de"}
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.688934 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k7sj5"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.721814 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cxrx9"]
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.723681 4874 scope.go:117] "RemoveContainer" containerID="6061b65fbd29bdf70fd1e090a65e25c9ca80adc4b7d5e07809dd238cc90776ba"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.736032 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cxrx9"]
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.745036 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k7sj5"]
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.751876 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k7sj5"]
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.756344 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbbps"]
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.759713 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbbps"]
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.764929 4874 scope.go:117] "RemoveContainer" containerID="769e221159e673ec03f3822d5d4059809f4101a42a4ddda5d439bdab06ed4064"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.778376 4874 scope.go:117] "RemoveContainer" containerID="0648ed6c2fadca30717bac213e382fb0e0ccb45470ec390feeb8786d37e89637"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.792578 4874 scope.go:117] "RemoveContainer" containerID="74aa9e987c447e71fa4ef0ece296f3c4949e2a2a7f45a155b2c483ac48eabbab"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.806848 4874 scope.go:117] "RemoveContainer" containerID="1ecd914b99d6525c762a368e3ff1d7acae50f9b9d1238be939d00bb461aca834"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.822690 4874 scope.go:117] "RemoveContainer" containerID="1cd3bbf2af5a2b7f7b7a4ab5adaef76db5460bdb370548bcbe2419923c557fc1"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.867545 4874 scope.go:117] "RemoveContainer" containerID="55598fd3ad7b384be613bb53ff1f155b2db0dc67c238c62da8732c037ab04867"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.882701 4874 scope.go:117] "RemoveContainer" containerID="ab54d9fab26a3ef6698d354ee6813928ad7aedd283867428b79b6242016d11de"
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.980049 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chgtv"]
Jan 22 11:44:07 crc kubenswrapper[4874]: I0122 11:44:07.980327 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-chgtv" podUID="25f55928-a244-44f3-83a5-b6bdf551bda6" containerName="registry-server" containerID="cri-o://4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7" gracePeriod=2
Jan 22 11:44:08 crc kubenswrapper[4874]: I0122 11:44:08.106655 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" podUID="9fd4241f-b523-4d66-bcdb-c3bb691765c9" containerName="oauth-openshift" containerID="cri-o://0846f2eb4e8eb047949a104105735cd3c58d1f0d30fc25a2c6f738a80fe669e8" gracePeriod=15
Jan 22 11:44:08 crc kubenswrapper[4874]: I0122 11:44:08.727438 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="194d73ac-fe2b-4e80-b03b-c1b780b55990" path="/var/lib/kubelet/pods/194d73ac-fe2b-4e80-b03b-c1b780b55990/volumes"
Jan 22 11:44:08 crc kubenswrapper[4874]: I0122 11:44:08.728319 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" path="/var/lib/kubelet/pods/7d6b2fd5-d040-4be5-afb1-a375bddd3e88/volumes"
Jan 22 11:44:08 crc kubenswrapper[4874]: I0122 11:44:08.728907 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede22568-7d51-42d6-b96d-1783c4e5b370" path="/var/lib/kubelet/pods/ede22568-7d51-42d6-b96d-1783c4e5b370/volumes"
Jan 22 11:44:08 crc kubenswrapper[4874]: E0122 11:44:08.872785 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25f55928_a244_44f3_83a5_b6bdf551bda6.slice/crio-conmon-4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7.scope\": RecentStats: unable to find data in memory cache]"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.478247 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chgtv"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.586172 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.633837 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f55928-a244-44f3-83a5-b6bdf551bda6-catalog-content\") pod \"25f55928-a244-44f3-83a5-b6bdf551bda6\" (UID: \"25f55928-a244-44f3-83a5-b6bdf551bda6\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.633876 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f55928-a244-44f3-83a5-b6bdf551bda6-utilities\") pod \"25f55928-a244-44f3-83a5-b6bdf551bda6\" (UID: \"25f55928-a244-44f3-83a5-b6bdf551bda6\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.633972 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v87lj\" (UniqueName: \"kubernetes.io/projected/25f55928-a244-44f3-83a5-b6bdf551bda6-kube-api-access-v87lj\") pod \"25f55928-a244-44f3-83a5-b6bdf551bda6\" (UID: \"25f55928-a244-44f3-83a5-b6bdf551bda6\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.635876 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25f55928-a244-44f3-83a5-b6bdf551bda6-utilities" (OuterVolumeSpecName: "utilities") pod "25f55928-a244-44f3-83a5-b6bdf551bda6" (UID: "25f55928-a244-44f3-83a5-b6bdf551bda6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.641532 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25f55928-a244-44f3-83a5-b6bdf551bda6-kube-api-access-v87lj" (OuterVolumeSpecName: "kube-api-access-v87lj") pod "25f55928-a244-44f3-83a5-b6bdf551bda6" (UID: "25f55928-a244-44f3-83a5-b6bdf551bda6"). InnerVolumeSpecName "kube-api-access-v87lj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.659620 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-df7774cfb-gpckj"]
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.659865 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d32e28-7549-499c-a0bb-5ab789653d5e" containerName="pruner"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.659878 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d32e28-7549-499c-a0bb-5ab789653d5e" containerName="pruner"
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.659909 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede22568-7d51-42d6-b96d-1783c4e5b370" containerName="extract-content"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.659916 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede22568-7d51-42d6-b96d-1783c4e5b370" containerName="extract-content"
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.659926 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194d73ac-fe2b-4e80-b03b-c1b780b55990" containerName="extract-content"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.659932 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="194d73ac-fe2b-4e80-b03b-c1b780b55990" containerName="extract-content"
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.659939 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fd4241f-b523-4d66-bcdb-c3bb691765c9" containerName="oauth-openshift"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.659945 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fd4241f-b523-4d66-bcdb-c3bb691765c9" containerName="oauth-openshift"
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.659952 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194d73ac-fe2b-4e80-b03b-c1b780b55990" containerName="extract-utilities"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.659959 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="194d73ac-fe2b-4e80-b03b-c1b780b55990" containerName="extract-utilities"
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.659990 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" containerName="extract-content"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.659996 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" containerName="extract-content"
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.660003 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f55928-a244-44f3-83a5-b6bdf551bda6" containerName="registry-server"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660009 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f55928-a244-44f3-83a5-b6bdf551bda6" containerName="registry-server"
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.660017 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194d73ac-fe2b-4e80-b03b-c1b780b55990" containerName="registry-server"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660022 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="194d73ac-fe2b-4e80-b03b-c1b780b55990" containerName="registry-server"
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.660030 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f55928-a244-44f3-83a5-b6bdf551bda6" containerName="extract-content"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660036 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f55928-a244-44f3-83a5-b6bdf551bda6" containerName="extract-content"
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.660062 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f55928-a244-44f3-83a5-b6bdf551bda6" containerName="extract-utilities"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660068 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f55928-a244-44f3-83a5-b6bdf551bda6" containerName="extract-utilities"
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.660077 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede22568-7d51-42d6-b96d-1783c4e5b370" containerName="registry-server"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660083 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede22568-7d51-42d6-b96d-1783c4e5b370" containerName="registry-server"
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.660089 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede22568-7d51-42d6-b96d-1783c4e5b370" containerName="extract-utilities"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660095 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede22568-7d51-42d6-b96d-1783c4e5b370" containerName="extract-utilities"
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.660103 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" containerName="extract-utilities"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660109 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" containerName="extract-utilities"
Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.660118 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" containerName="registry-server"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660142 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" containerName="registry-server"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660246 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6b2fd5-d040-4be5-afb1-a375bddd3e88" containerName="registry-server"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660256 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="57d32e28-7549-499c-a0bb-5ab789653d5e" containerName="pruner"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660265 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="194d73ac-fe2b-4e80-b03b-c1b780b55990" containerName="registry-server"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660272 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fd4241f-b523-4d66-bcdb-c3bb691765c9" containerName="oauth-openshift"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660300 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="25f55928-a244-44f3-83a5-b6bdf551bda6" containerName="registry-server"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660309 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede22568-7d51-42d6-b96d-1783c4e5b370" containerName="registry-server"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.660680 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.677782 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-df7774cfb-gpckj"]
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.718348 4874 generic.go:334] "Generic (PLEG): container finished" podID="9fd4241f-b523-4d66-bcdb-c3bb691765c9" containerID="0846f2eb4e8eb047949a104105735cd3c58d1f0d30fc25a2c6f738a80fe669e8" exitCode=0
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.718453 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" event={"ID":"9fd4241f-b523-4d66-bcdb-c3bb691765c9","Type":"ContainerDied","Data":"0846f2eb4e8eb047949a104105735cd3c58d1f0d30fc25a2c6f738a80fe669e8"}
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.718675 4874 scope.go:117] "RemoveContainer" containerID="0846f2eb4e8eb047949a104105735cd3c58d1f0d30fc25a2c6f738a80fe669e8"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.718773 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz" event={"ID":"9fd4241f-b523-4d66-bcdb-c3bb691765c9","Type":"ContainerDied","Data":"ebdd6071b5dc3f0f50c15dd7a89f7157d85c2d83e8eb328a3a8f9e41e0ceff64"}
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.719164 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8j9tz"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.723761 4874 generic.go:334] "Generic (PLEG): container finished" podID="25f55928-a244-44f3-83a5-b6bdf551bda6" containerID="4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7" exitCode=0
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.723804 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chgtv" event={"ID":"25f55928-a244-44f3-83a5-b6bdf551bda6","Type":"ContainerDied","Data":"4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7"}
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.723854 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chgtv" event={"ID":"25f55928-a244-44f3-83a5-b6bdf551bda6","Type":"ContainerDied","Data":"f695035bc56b57dcd6c7f8fc2e3f5c5198706ed0269a97d26b4dab2f6db84183"}
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.723860 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chgtv"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.735736 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-router-certs\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.735809 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-serving-cert\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736136 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-ocp-branding-template\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736176 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-session\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736222 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-service-ca\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736251 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-cliconfig\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736281 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-audit-policies\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736330 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-idp-0-file-data\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736362 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9fd4241f-b523-4d66-bcdb-c3bb691765c9-audit-dir\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736423 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-provider-selection\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736460 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-error\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736498 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9gf8\" (UniqueName: \"kubernetes.io/projected/9fd4241f-b523-4d66-bcdb-c3bb691765c9-kube-api-access-h9gf8\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736530 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-trusted-ca-bundle\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736566 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-login\") pod \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\" (UID: \"9fd4241f-b523-4d66-bcdb-c3bb691765c9\") "
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736748 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd041a63-dbb7-4dd5-b42f-f715f4686487-audit-policies\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736808 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd041a63-dbb7-4dd5-b42f-f715f4686487-audit-dir\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736831 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-user-template-login\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736868 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-service-ca\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736890 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-cliconfig\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.736968 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.737004 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-serving-cert\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.737051 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-user-template-error\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.737072 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj"
Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.737102 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-user-template-provider-selection\") pod
\"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.737128 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-router-certs\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.737152 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-session\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.737179 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6m9d\" (UniqueName: \"kubernetes.io/projected/dd041a63-dbb7-4dd5-b42f-f715f4686487-kube-api-access-c6m9d\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.737200 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc 
kubenswrapper[4874]: I0122 11:44:09.737248 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25f55928-a244-44f3-83a5-b6bdf551bda6-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.737264 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v87lj\" (UniqueName: \"kubernetes.io/projected/25f55928-a244-44f3-83a5-b6bdf551bda6-kube-api-access-v87lj\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.737815 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fd4241f-b523-4d66-bcdb-c3bb691765c9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.738459 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.738584 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.739057 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.739121 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.739598 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.742558 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.742737 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.742917 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.743164 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.743833 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.744093 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.744197 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.744346 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fd4241f-b523-4d66-bcdb-c3bb691765c9-kube-api-access-h9gf8" (OuterVolumeSpecName: "kube-api-access-h9gf8") pod "9fd4241f-b523-4d66-bcdb-c3bb691765c9" (UID: "9fd4241f-b523-4d66-bcdb-c3bb691765c9"). InnerVolumeSpecName "kube-api-access-h9gf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.748678 4874 scope.go:117] "RemoveContainer" containerID="0846f2eb4e8eb047949a104105735cd3c58d1f0d30fc25a2c6f738a80fe669e8" Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.749161 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0846f2eb4e8eb047949a104105735cd3c58d1f0d30fc25a2c6f738a80fe669e8\": container with ID starting with 0846f2eb4e8eb047949a104105735cd3c58d1f0d30fc25a2c6f738a80fe669e8 not found: ID does not exist" containerID="0846f2eb4e8eb047949a104105735cd3c58d1f0d30fc25a2c6f738a80fe669e8" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.749200 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0846f2eb4e8eb047949a104105735cd3c58d1f0d30fc25a2c6f738a80fe669e8"} err="failed to get container status \"0846f2eb4e8eb047949a104105735cd3c58d1f0d30fc25a2c6f738a80fe669e8\": rpc error: code = NotFound desc = could not find container \"0846f2eb4e8eb047949a104105735cd3c58d1f0d30fc25a2c6f738a80fe669e8\": container with ID starting with 0846f2eb4e8eb047949a104105735cd3c58d1f0d30fc25a2c6f738a80fe669e8 not found: ID does not exist" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.749253 4874 scope.go:117] "RemoveContainer" containerID="4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.762363 4874 scope.go:117] "RemoveContainer" containerID="070c1bb6dbcaa01d78ef737b95b41d8ef036a0a7e51764ac67798ef0b37e2ccf" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.771213 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25f55928-a244-44f3-83a5-b6bdf551bda6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25f55928-a244-44f3-83a5-b6bdf551bda6" (UID: 
"25f55928-a244-44f3-83a5-b6bdf551bda6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.777624 4874 scope.go:117] "RemoveContainer" containerID="a94610599ca8c298d3b9c45ba5eaf696e08b7bd8dfa7298e41a50594597af12d" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.797297 4874 scope.go:117] "RemoveContainer" containerID="4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7" Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.797763 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7\": container with ID starting with 4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7 not found: ID does not exist" containerID="4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.797806 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7"} err="failed to get container status \"4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7\": rpc error: code = NotFound desc = could not find container \"4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7\": container with ID starting with 4e6bb093ab81592869bb3d215fdba956487ddaa72974f7be113130776c97fcd7 not found: ID does not exist" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.797840 4874 scope.go:117] "RemoveContainer" containerID="070c1bb6dbcaa01d78ef737b95b41d8ef036a0a7e51764ac67798ef0b37e2ccf" Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.798137 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"070c1bb6dbcaa01d78ef737b95b41d8ef036a0a7e51764ac67798ef0b37e2ccf\": container 
with ID starting with 070c1bb6dbcaa01d78ef737b95b41d8ef036a0a7e51764ac67798ef0b37e2ccf not found: ID does not exist" containerID="070c1bb6dbcaa01d78ef737b95b41d8ef036a0a7e51764ac67798ef0b37e2ccf" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.798182 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070c1bb6dbcaa01d78ef737b95b41d8ef036a0a7e51764ac67798ef0b37e2ccf"} err="failed to get container status \"070c1bb6dbcaa01d78ef737b95b41d8ef036a0a7e51764ac67798ef0b37e2ccf\": rpc error: code = NotFound desc = could not find container \"070c1bb6dbcaa01d78ef737b95b41d8ef036a0a7e51764ac67798ef0b37e2ccf\": container with ID starting with 070c1bb6dbcaa01d78ef737b95b41d8ef036a0a7e51764ac67798ef0b37e2ccf not found: ID does not exist" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.798212 4874 scope.go:117] "RemoveContainer" containerID="a94610599ca8c298d3b9c45ba5eaf696e08b7bd8dfa7298e41a50594597af12d" Jan 22 11:44:09 crc kubenswrapper[4874]: E0122 11:44:09.798664 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a94610599ca8c298d3b9c45ba5eaf696e08b7bd8dfa7298e41a50594597af12d\": container with ID starting with a94610599ca8c298d3b9c45ba5eaf696e08b7bd8dfa7298e41a50594597af12d not found: ID does not exist" containerID="a94610599ca8c298d3b9c45ba5eaf696e08b7bd8dfa7298e41a50594597af12d" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.798723 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a94610599ca8c298d3b9c45ba5eaf696e08b7bd8dfa7298e41a50594597af12d"} err="failed to get container status \"a94610599ca8c298d3b9c45ba5eaf696e08b7bd8dfa7298e41a50594597af12d\": rpc error: code = NotFound desc = could not find container \"a94610599ca8c298d3b9c45ba5eaf696e08b7bd8dfa7298e41a50594597af12d\": container with ID starting with a94610599ca8c298d3b9c45ba5eaf696e08b7bd8dfa7298e41a50594597af12d not 
found: ID does not exist" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838179 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838236 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-serving-cert\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838268 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-user-template-error\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838293 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838327 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838358 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-router-certs\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838390 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-session\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838449 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6m9d\" (UniqueName: \"kubernetes.io/projected/dd041a63-dbb7-4dd5-b42f-f715f4686487-kube-api-access-c6m9d\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838483 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " 
pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838529 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd041a63-dbb7-4dd5-b42f-f715f4686487-audit-policies\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838619 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd041a63-dbb7-4dd5-b42f-f715f4686487-audit-dir\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838650 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-user-template-login\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838699 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-service-ca\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838733 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-cliconfig\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838808 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838827 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838844 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838862 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838879 4874 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838895 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25f55928-a244-44f3-83a5-b6bdf551bda6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 
11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838913 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838929 4874 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9fd4241f-b523-4d66-bcdb-c3bb691765c9-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838948 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838968 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.838985 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9gf8\" (UniqueName: \"kubernetes.io/projected/9fd4241f-b523-4d66-bcdb-c3bb691765c9-kube-api-access-h9gf8\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.839001 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.839022 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.839038 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.839053 4874 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fd4241f-b523-4d66-bcdb-c3bb691765c9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.839600 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd041a63-dbb7-4dd5-b42f-f715f4686487-audit-dir\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.839893 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-cliconfig\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.840193 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 
22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.840349 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-service-ca\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.841213 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd041a63-dbb7-4dd5-b42f-f715f4686487-audit-policies\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.842519 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-user-template-error\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.842638 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-router-certs\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.843702 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-user-template-login\") pod 
\"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.843828 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.845555 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-serving-cert\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.845763 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.845864 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.847644 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd041a63-dbb7-4dd5-b42f-f715f4686487-v4-0-config-system-session\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.866638 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6m9d\" (UniqueName: \"kubernetes.io/projected/dd041a63-dbb7-4dd5-b42f-f715f4686487-kube-api-access-c6m9d\") pod \"oauth-openshift-df7774cfb-gpckj\" (UID: \"dd041a63-dbb7-4dd5-b42f-f715f4686487\") " pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:09 crc kubenswrapper[4874]: I0122 11:44:09.994345 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:10 crc kubenswrapper[4874]: I0122 11:44:10.098235 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chgtv"] Jan 22 11:44:10 crc kubenswrapper[4874]: I0122 11:44:10.104209 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-chgtv"] Jan 22 11:44:10 crc kubenswrapper[4874]: I0122 11:44:10.111220 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8j9tz"] Jan 22 11:44:10 crc kubenswrapper[4874]: I0122 11:44:10.111270 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8j9tz"] Jan 22 11:44:10 crc kubenswrapper[4874]: I0122 11:44:10.463588 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-df7774cfb-gpckj"] Jan 22 11:44:10 crc kubenswrapper[4874]: W0122 11:44:10.474476 4874 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd041a63_dbb7_4dd5_b42f_f715f4686487.slice/crio-2e609717995262733d45fa73c905d6bf626b78e039ed91e72db579d70bf67f64 WatchSource:0}: Error finding container 2e609717995262733d45fa73c905d6bf626b78e039ed91e72db579d70bf67f64: Status 404 returned error can't find the container with id 2e609717995262733d45fa73c905d6bf626b78e039ed91e72db579d70bf67f64 Jan 22 11:44:10 crc kubenswrapper[4874]: I0122 11:44:10.729687 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25f55928-a244-44f3-83a5-b6bdf551bda6" path="/var/lib/kubelet/pods/25f55928-a244-44f3-83a5-b6bdf551bda6/volumes" Jan 22 11:44:10 crc kubenswrapper[4874]: I0122 11:44:10.730632 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fd4241f-b523-4d66-bcdb-c3bb691765c9" path="/var/lib/kubelet/pods/9fd4241f-b523-4d66-bcdb-c3bb691765c9/volumes" Jan 22 11:44:10 crc kubenswrapper[4874]: I0122 11:44:10.733760 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" event={"ID":"dd041a63-dbb7-4dd5-b42f-f715f4686487","Type":"ContainerStarted","Data":"2e609717995262733d45fa73c905d6bf626b78e039ed91e72db579d70bf67f64"} Jan 22 11:44:11 crc kubenswrapper[4874]: I0122 11:44:11.739518 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" event={"ID":"dd041a63-dbb7-4dd5-b42f-f715f4686487","Type":"ContainerStarted","Data":"00324d0b7f08ac0455ed071de4cfd597cfb0cb30bd337ca733fbb70cd91bdbd9"} Jan 22 11:44:11 crc kubenswrapper[4874]: I0122 11:44:11.739874 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:11 crc kubenswrapper[4874]: I0122 11:44:11.744844 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" Jan 22 11:44:11 crc 
kubenswrapper[4874]: I0122 11:44:11.759759 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-df7774cfb-gpckj" podStartSLOduration=28.759736195 podStartE2EDuration="28.759736195s" podCreationTimestamp="2026-01-22 11:43:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:44:11.75642232 +0000 UTC m=+225.601493410" watchObservedRunningTime="2026-01-22 11:44:11.759736195 +0000 UTC m=+225.604807265" Jan 22 11:44:13 crc kubenswrapper[4874]: I0122 11:44:13.520538 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:44:13 crc kubenswrapper[4874]: I0122 11:44:13.520662 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:44:13 crc kubenswrapper[4874]: I0122 11:44:13.520731 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:44:13 crc kubenswrapper[4874]: I0122 11:44:13.521644 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:44:13 crc kubenswrapper[4874]: 
I0122 11:44:13.521751 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192" gracePeriod=600 Jan 22 11:44:13 crc kubenswrapper[4874]: I0122 11:44:13.763761 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192" exitCode=0 Jan 22 11:44:13 crc kubenswrapper[4874]: I0122 11:44:13.763824 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192"} Jan 22 11:44:14 crc kubenswrapper[4874]: I0122 11:44:14.776714 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"cf675adc04e058930041f77cfb016f23f15475800ec1dca3cd6db1579e71257b"} Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.613632 4874 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.614639 4874 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.614802 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.615019 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb" gracePeriod=15 Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.615079 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d" gracePeriod=15 Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.615013 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e" gracePeriod=15 Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.615168 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88" gracePeriod=15 Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.615152 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea" gracePeriod=15 Jan 22 11:44:15 crc 
kubenswrapper[4874]: I0122 11:44:15.615733 4874 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 11:44:15 crc kubenswrapper[4874]: E0122 11:44:15.615928 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.615949 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 11:44:15 crc kubenswrapper[4874]: E0122 11:44:15.615965 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.615974 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 11:44:15 crc kubenswrapper[4874]: E0122 11:44:15.615986 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.615995 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 11:44:15 crc kubenswrapper[4874]: E0122 11:44:15.616008 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.616016 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 11:44:15 crc kubenswrapper[4874]: E0122 11:44:15.616029 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 
22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.616038 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 11:44:15 crc kubenswrapper[4874]: E0122 11:44:15.616054 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.616064 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 11:44:15 crc kubenswrapper[4874]: E0122 11:44:15.616074 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.616083 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.616208 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.616221 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.616233 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.616244 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.616258 4874 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.616267 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.725739 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.725787 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.725873 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.725903 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.725923 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.725954 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.725985 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.726031 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.783113 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.785023 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.785913 4874 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e" exitCode=0 Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.785945 4874 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88" exitCode=0 Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.785954 4874 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d" exitCode=0 Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.785961 4874 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea" exitCode=2 Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.786034 4874 scope.go:117] "RemoveContainer" containerID="ef9dd0379be6fda0ab37fd98b2f8118e8ee772c2293b312689a2f14816024782" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.826763 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.826835 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.826856 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.826888 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.826909 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.826960 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.826968 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.826918 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.827003 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.827058 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.827066 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.827094 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc 
kubenswrapper[4874]: I0122 11:44:15.827216 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.827323 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.827333 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:15 crc kubenswrapper[4874]: I0122 11:44:15.827463 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:16 crc kubenswrapper[4874]: I0122 11:44:16.721286 4874 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:16 crc kubenswrapper[4874]: I0122 11:44:16.796363 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 11:44:16 crc kubenswrapper[4874]: I0122 11:44:16.800291 4874 generic.go:334] "Generic (PLEG): container finished" podID="b1c5453e-ced5-4d10-b696-df2a76b6a783" containerID="42a935729ff60b97150afb87bbec2f6ec94b10ae9d811cc58f762b075b789f67" exitCode=0 Jan 22 11:44:16 crc kubenswrapper[4874]: I0122 11:44:16.800368 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b1c5453e-ced5-4d10-b696-df2a76b6a783","Type":"ContainerDied","Data":"42a935729ff60b97150afb87bbec2f6ec94b10ae9d811cc58f762b075b789f67"} Jan 22 11:44:16 crc kubenswrapper[4874]: I0122 11:44:16.801427 4874 status_manager.go:851] "Failed to get status for pod" podUID="b1c5453e-ced5-4d10-b696-df2a76b6a783" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:17 crc kubenswrapper[4874]: E0122 11:44:17.753238 4874 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.153:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" volumeName="registry-storage" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.033154 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.034098 4874 status_manager.go:851] "Failed to get status for pod" podUID="b1c5453e-ced5-4d10-b696-df2a76b6a783" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.057933 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1c5453e-ced5-4d10-b696-df2a76b6a783-kube-api-access\") pod \"b1c5453e-ced5-4d10-b696-df2a76b6a783\" (UID: \"b1c5453e-ced5-4d10-b696-df2a76b6a783\") " Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.058059 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1c5453e-ced5-4d10-b696-df2a76b6a783-kubelet-dir\") pod \"b1c5453e-ced5-4d10-b696-df2a76b6a783\" (UID: \"b1c5453e-ced5-4d10-b696-df2a76b6a783\") " Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.058101 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1c5453e-ced5-4d10-b696-df2a76b6a783-var-lock\") pod \"b1c5453e-ced5-4d10-b696-df2a76b6a783\" (UID: \"b1c5453e-ced5-4d10-b696-df2a76b6a783\") " Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.058552 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1c5453e-ced5-4d10-b696-df2a76b6a783-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b1c5453e-ced5-4d10-b696-df2a76b6a783" (UID: "b1c5453e-ced5-4d10-b696-df2a76b6a783"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.058572 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1c5453e-ced5-4d10-b696-df2a76b6a783-var-lock" (OuterVolumeSpecName: "var-lock") pod "b1c5453e-ced5-4d10-b696-df2a76b6a783" (UID: "b1c5453e-ced5-4d10-b696-df2a76b6a783"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.063688 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c5453e-ced5-4d10-b696-df2a76b6a783-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b1c5453e-ced5-4d10-b696-df2a76b6a783" (UID: "b1c5453e-ced5-4d10-b696-df2a76b6a783"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.098194 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.099190 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.100054 4874 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.101863 4874 status_manager.go:851] "Failed to get status for pod" podUID="b1c5453e-ced5-4d10-b696-df2a76b6a783" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.159488 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.159884 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.160159 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.159695 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.159945 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.160269 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.161885 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1c5453e-ced5-4d10-b696-df2a76b6a783-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.162076 4874 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1c5453e-ced5-4d10-b696-df2a76b6a783-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.162210 4874 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1c5453e-ced5-4d10-b696-df2a76b6a783-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.162339 4874 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.162496 4874 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.162640 4874 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.726874 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.815306 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b1c5453e-ced5-4d10-b696-df2a76b6a783","Type":"ContainerDied","Data":"ba05ca2e002e9c3237d941c91912a43cc5d0958363ae275896c78814a9a08db4"} Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.815696 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba05ca2e002e9c3237d941c91912a43cc5d0958363ae275896c78814a9a08db4" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.815374 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.818802 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.819116 4874 status_manager.go:851] "Failed to get status for pod" podUID="b1c5453e-ced5-4d10-b696-df2a76b6a783" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.820387 4874 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb" exitCode=0 Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.820479 4874 scope.go:117] "RemoveContainer" containerID="18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.820537 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.821498 4874 status_manager.go:851] "Failed to get status for pod" podUID="b1c5453e-ced5-4d10-b696-df2a76b6a783" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.821716 4874 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.824042 4874 status_manager.go:851] "Failed to get status for pod" podUID="b1c5453e-ced5-4d10-b696-df2a76b6a783" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.824214 4874 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.848955 4874 scope.go:117] "RemoveContainer" containerID="d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.863371 4874 scope.go:117] "RemoveContainer" containerID="9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d" Jan 22 11:44:18 crc 
kubenswrapper[4874]: I0122 11:44:18.877829 4874 scope.go:117] "RemoveContainer" containerID="1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.890452 4874 scope.go:117] "RemoveContainer" containerID="cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.912869 4874 scope.go:117] "RemoveContainer" containerID="a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.929414 4874 scope.go:117] "RemoveContainer" containerID="18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e" Jan 22 11:44:18 crc kubenswrapper[4874]: E0122 11:44:18.929845 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\": container with ID starting with 18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e not found: ID does not exist" containerID="18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.929891 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e"} err="failed to get container status \"18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\": rpc error: code = NotFound desc = could not find container \"18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e\": container with ID starting with 18ef891e18831de897e651335dbab8089da981d5b7ed6663fd98fc78e2f9a78e not found: ID does not exist" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.929920 4874 scope.go:117] "RemoveContainer" containerID="d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88" Jan 22 11:44:18 crc kubenswrapper[4874]: E0122 11:44:18.930344 
4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\": container with ID starting with d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88 not found: ID does not exist" containerID="d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.930374 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88"} err="failed to get container status \"d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\": rpc error: code = NotFound desc = could not find container \"d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88\": container with ID starting with d67ea1437424158afdd37b9c6c83aa949691a12bfb9fc02403b84782f8581c88 not found: ID does not exist" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.930392 4874 scope.go:117] "RemoveContainer" containerID="9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d" Jan 22 11:44:18 crc kubenswrapper[4874]: E0122 11:44:18.934076 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\": container with ID starting with 9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d not found: ID does not exist" containerID="9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.934143 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d"} err="failed to get container status \"9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\": rpc error: code = 
NotFound desc = could not find container \"9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d\": container with ID starting with 9df7a5a3d92759dcbbee7531933f0acb22a64c50f61c755349b62b014b1efd1d not found: ID does not exist" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.934195 4874 scope.go:117] "RemoveContainer" containerID="1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea" Jan 22 11:44:18 crc kubenswrapper[4874]: E0122 11:44:18.934685 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\": container with ID starting with 1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea not found: ID does not exist" containerID="1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.934753 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea"} err="failed to get container status \"1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\": rpc error: code = NotFound desc = could not find container \"1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea\": container with ID starting with 1db3356e896765e6aa09a9812c36a172116aec98395c8a9901937823555795ea not found: ID does not exist" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.934786 4874 scope.go:117] "RemoveContainer" containerID="cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb" Jan 22 11:44:18 crc kubenswrapper[4874]: E0122 11:44:18.935153 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\": container with ID starting with 
cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb not found: ID does not exist" containerID="cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.935180 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb"} err="failed to get container status \"cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\": rpc error: code = NotFound desc = could not find container \"cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb\": container with ID starting with cae6eda3d4ce208d450cd707291f2aeafb2b580500e3bd85fadf6e079dec9ceb not found: ID does not exist" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.935196 4874 scope.go:117] "RemoveContainer" containerID="a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1" Jan 22 11:44:18 crc kubenswrapper[4874]: E0122 11:44:18.935622 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\": container with ID starting with a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1 not found: ID does not exist" containerID="a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1" Jan 22 11:44:18 crc kubenswrapper[4874]: I0122 11:44:18.935660 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1"} err="failed to get container status \"a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\": rpc error: code = NotFound desc = could not find container \"a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1\": container with ID starting with a8fc1c5897cb1005db092fb9109a9687fa2d55c7bdd3e81cc8b2cbb68f01dbe1 not found: ID does not 
exist" Jan 22 11:44:20 crc kubenswrapper[4874]: E0122 11:44:20.654578 4874 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:20 crc kubenswrapper[4874]: I0122 11:44:20.655273 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:20 crc kubenswrapper[4874]: E0122 11:44:20.686167 4874 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.153:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d0af16e9c6bbb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:44:20.685614011 +0000 UTC m=+234.530685091,LastTimestamp:2026-01-22 11:44:20.685614011 +0000 UTC m=+234.530685091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:44:20 crc kubenswrapper[4874]: I0122 11:44:20.833977 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e49a6e4f262e8fee29b4eb5addf749ce3f57dfda52784b0c23b853a78fbd7d52"} Jan 22 11:44:21 crc kubenswrapper[4874]: I0122 11:44:21.841413 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7b60b861ffc1abe331ac35bff6d3cb66212140990112c08598f13b185feceff7"} Jan 22 11:44:21 crc kubenswrapper[4874]: E0122 11:44:21.842200 4874 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:21 crc kubenswrapper[4874]: I0122 11:44:21.842438 4874 status_manager.go:851] "Failed to get status for pod" podUID="b1c5453e-ced5-4d10-b696-df2a76b6a783" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:21 crc kubenswrapper[4874]: E0122 11:44:21.856754 4874 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:21 crc kubenswrapper[4874]: E0122 11:44:21.856923 4874 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:21 crc kubenswrapper[4874]: E0122 11:44:21.857173 4874 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:21 crc kubenswrapper[4874]: E0122 11:44:21.857615 4874 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:21 crc kubenswrapper[4874]: E0122 11:44:21.858097 4874 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:21 crc kubenswrapper[4874]: I0122 11:44:21.858124 4874 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 11:44:21 crc kubenswrapper[4874]: E0122 11:44:21.858532 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="200ms" Jan 22 11:44:22 crc kubenswrapper[4874]: E0122 11:44:22.059517 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="400ms" Jan 22 11:44:22 crc kubenswrapper[4874]: E0122 11:44:22.460914 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="800ms" Jan 22 11:44:22 crc kubenswrapper[4874]: 
E0122 11:44:22.849255 4874 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:44:23 crc kubenswrapper[4874]: E0122 11:44:23.071100 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:44:23Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:44:23Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:44:23Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:44:23Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:23 crc kubenswrapper[4874]: E0122 11:44:23.071768 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:23 crc kubenswrapper[4874]: E0122 11:44:23.072231 4874 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:23 crc kubenswrapper[4874]: E0122 11:44:23.072537 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:23 crc kubenswrapper[4874]: E0122 11:44:23.072981 4874 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:23 crc kubenswrapper[4874]: E0122 11:44:23.073011 4874 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:44:23 crc kubenswrapper[4874]: E0122 11:44:23.262373 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="1.6s" Jan 22 11:44:24 crc kubenswrapper[4874]: E0122 11:44:24.863729 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="3.2s" Jan 22 11:44:25 crc kubenswrapper[4874]: E0122 11:44:25.626721 4874 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.153:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d0af16e9c6bbb openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:44:20.685614011 +0000 UTC m=+234.530685091,LastTimestamp:2026-01-22 11:44:20.685614011 +0000 UTC m=+234.530685091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:44:26 crc kubenswrapper[4874]: I0122 11:44:26.720051 4874 status_manager.go:851] "Failed to get status for pod" podUID="b1c5453e-ced5-4d10-b696-df2a76b6a783" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:27 crc kubenswrapper[4874]: I0122 11:44:27.715144 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:27 crc kubenswrapper[4874]: I0122 11:44:27.716394 4874 status_manager.go:851] "Failed to get status for pod" podUID="b1c5453e-ced5-4d10-b696-df2a76b6a783" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:27 crc kubenswrapper[4874]: I0122 11:44:27.738356 4874 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d78bedba-59c5-4a8a-89c2-4414b15e80a7" Jan 22 11:44:27 crc kubenswrapper[4874]: I0122 11:44:27.738441 4874 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d78bedba-59c5-4a8a-89c2-4414b15e80a7" Jan 22 11:44:27 crc kubenswrapper[4874]: E0122 11:44:27.738971 4874 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:27 crc kubenswrapper[4874]: I0122 11:44:27.739629 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:27 crc kubenswrapper[4874]: W0122 11:44:27.778779 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-f3488b4f2fc617bfa2217b39cd9de34b2d359400f65ba521d8ccda0f4051f4b6 WatchSource:0}: Error finding container f3488b4f2fc617bfa2217b39cd9de34b2d359400f65ba521d8ccda0f4051f4b6: Status 404 returned error can't find the container with id f3488b4f2fc617bfa2217b39cd9de34b2d359400f65ba521d8ccda0f4051f4b6 Jan 22 11:44:27 crc kubenswrapper[4874]: I0122 11:44:27.880234 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f3488b4f2fc617bfa2217b39cd9de34b2d359400f65ba521d8ccda0f4051f4b6"} Jan 22 11:44:28 crc kubenswrapper[4874]: E0122 11:44:28.065338 4874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="6.4s" Jan 22 11:44:28 crc kubenswrapper[4874]: I0122 11:44:28.888298 4874 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="fd0a1ae174bc904842bd2821353b4de9bffa6edc8f0e47e624553679d131e045" exitCode=0 Jan 22 11:44:28 crc kubenswrapper[4874]: I0122 11:44:28.888379 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"fd0a1ae174bc904842bd2821353b4de9bffa6edc8f0e47e624553679d131e045"} Jan 22 11:44:28 crc kubenswrapper[4874]: I0122 11:44:28.888704 4874 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="d78bedba-59c5-4a8a-89c2-4414b15e80a7" Jan 22 11:44:28 crc kubenswrapper[4874]: I0122 11:44:28.888766 4874 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d78bedba-59c5-4a8a-89c2-4414b15e80a7" Jan 22 11:44:28 crc kubenswrapper[4874]: E0122 11:44:28.889268 4874 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:28 crc kubenswrapper[4874]: I0122 11:44:28.889324 4874 status_manager.go:851] "Failed to get status for pod" podUID="b1c5453e-ced5-4d10-b696-df2a76b6a783" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Jan 22 11:44:29 crc kubenswrapper[4874]: E0122 11:44:29.113340 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:44:29 crc kubenswrapper[4874]: I0122 11:44:29.909213 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 11:44:29 crc kubenswrapper[4874]: I0122 11:44:29.909573 4874 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793" exitCode=1 Jan 22 11:44:29 crc kubenswrapper[4874]: I0122 11:44:29.909628 4874 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793"} Jan 22 11:44:29 crc kubenswrapper[4874]: I0122 11:44:29.910065 4874 scope.go:117] "RemoveContainer" containerID="6d285287ccdbe99a7eed164ef995aab026c7088d89680574ad45a7f1fe24a793" Jan 22 11:44:29 crc kubenswrapper[4874]: I0122 11:44:29.922257 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ce3b1f8042cf273d0bcd53a00c902433a2ca31502d16f33d515aa09c8365791c"} Jan 22 11:44:29 crc kubenswrapper[4874]: I0122 11:44:29.922300 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d98d6525724c83080512bcbd4c829515e848026a175beb67959da73f4a62f304"} Jan 22 11:44:29 crc kubenswrapper[4874]: I0122 11:44:29.922317 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"df3c94ef87e3ca81fb4b8203fe451b7723686c560ceb594466daef05d9d8504b"} Jan 22 11:44:29 crc kubenswrapper[4874]: I0122 11:44:29.922328 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"107dd77e930b33f518438417063b199a81c13785679029c5025e4ac7dec019fd"} Jan 22 11:44:30 crc kubenswrapper[4874]: I0122 11:44:30.929629 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 11:44:30 crc kubenswrapper[4874]: I0122 11:44:30.929981 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0713db3892c7dafcf5a96b65709f3c24ec50d410cf96df8562bfe588a57586c4"} Jan 22 11:44:30 crc kubenswrapper[4874]: I0122 11:44:30.932889 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5a781d7d94c394dd791a3ac354d65cfece4064d9db0f75ee05ad177d791a33ee"} Jan 22 11:44:30 crc kubenswrapper[4874]: I0122 11:44:30.933089 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:30 crc kubenswrapper[4874]: I0122 11:44:30.933188 4874 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d78bedba-59c5-4a8a-89c2-4414b15e80a7" Jan 22 11:44:30 crc kubenswrapper[4874]: I0122 11:44:30.933206 4874 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d78bedba-59c5-4a8a-89c2-4414b15e80a7" Jan 22 11:44:32 crc kubenswrapper[4874]: I0122 11:44:32.630904 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:44:32 crc kubenswrapper[4874]: I0122 11:44:32.739827 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:32 crc kubenswrapper[4874]: I0122 11:44:32.739921 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:32 crc kubenswrapper[4874]: I0122 11:44:32.747003 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:34 crc kubenswrapper[4874]: I0122 11:44:34.904946 
4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:44:34 crc kubenswrapper[4874]: I0122 11:44:34.906064 4874 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 11:44:34 crc kubenswrapper[4874]: I0122 11:44:34.906115 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 11:44:35 crc kubenswrapper[4874]: I0122 11:44:35.941128 4874 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:35 crc kubenswrapper[4874]: I0122 11:44:35.957712 4874 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d78bedba-59c5-4a8a-89c2-4414b15e80a7" Jan 22 11:44:35 crc kubenswrapper[4874]: I0122 11:44:35.957743 4874 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d78bedba-59c5-4a8a-89c2-4414b15e80a7" Jan 22 11:44:35 crc kubenswrapper[4874]: I0122 11:44:35.961813 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:36 crc kubenswrapper[4874]: I0122 11:44:36.736327 4874 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9cb63f2a-a5c9-41c5-ab57-dcdfc7e42870" Jan 22 
11:44:36 crc kubenswrapper[4874]: I0122 11:44:36.970099 4874 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d78bedba-59c5-4a8a-89c2-4414b15e80a7" Jan 22 11:44:36 crc kubenswrapper[4874]: I0122 11:44:36.970466 4874 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d78bedba-59c5-4a8a-89c2-4414b15e80a7" Jan 22 11:44:36 crc kubenswrapper[4874]: I0122 11:44:36.977059 4874 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9cb63f2a-a5c9-41c5-ab57-dcdfc7e42870" Jan 22 11:44:44 crc kubenswrapper[4874]: I0122 11:44:44.904647 4874 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 11:44:44 crc kubenswrapper[4874]: I0122 11:44:44.905519 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 11:44:46 crc kubenswrapper[4874]: I0122 11:44:46.005379 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 11:44:46 crc kubenswrapper[4874]: I0122 11:44:46.744901 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 11:44:46 crc kubenswrapper[4874]: I0122 11:44:46.830443 4874 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"trusted-ca-bundle" Jan 22 11:44:47 crc kubenswrapper[4874]: I0122 11:44:47.143799 4874 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 11:44:47 crc kubenswrapper[4874]: I0122 11:44:47.146666 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 11:44:47 crc kubenswrapper[4874]: I0122 11:44:47.378490 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 11:44:47 crc kubenswrapper[4874]: I0122 11:44:47.476919 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 11:44:47 crc kubenswrapper[4874]: I0122 11:44:47.548045 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 11:44:47 crc kubenswrapper[4874]: I0122 11:44:47.904584 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 11:44:48 crc kubenswrapper[4874]: I0122 11:44:48.521387 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 11:44:48 crc kubenswrapper[4874]: I0122 11:44:48.584377 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 11:44:48 crc kubenswrapper[4874]: I0122 11:44:48.656854 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 11:44:48 crc kubenswrapper[4874]: I0122 11:44:48.737483 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 11:44:48 crc kubenswrapper[4874]: I0122 11:44:48.972756 
4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 11:44:48 crc kubenswrapper[4874]: I0122 11:44:48.981182 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 11:44:49 crc kubenswrapper[4874]: I0122 11:44:49.017030 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 11:44:49 crc kubenswrapper[4874]: I0122 11:44:49.021229 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 11:44:49 crc kubenswrapper[4874]: I0122 11:44:49.231672 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 11:44:49 crc kubenswrapper[4874]: I0122 11:44:49.282292 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 11:44:49 crc kubenswrapper[4874]: I0122 11:44:49.420169 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 11:44:49 crc kubenswrapper[4874]: I0122 11:44:49.439283 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 11:44:49 crc kubenswrapper[4874]: I0122 11:44:49.510420 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 11:44:49 crc kubenswrapper[4874]: I0122 11:44:49.599869 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 11:44:49 crc kubenswrapper[4874]: I0122 11:44:49.605144 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Jan 22 11:44:49 crc kubenswrapper[4874]: I0122 11:44:49.718024 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 11:44:49 crc kubenswrapper[4874]: I0122 11:44:49.768259 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 11:44:49 crc kubenswrapper[4874]: I0122 11:44:49.768496 4874 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 11:44:49 crc kubenswrapper[4874]: I0122 11:44:49.959043 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.163970 4874 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.178861 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.211600 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.234311 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.237870 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.239040 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 11:44:50 crc kubenswrapper[4874]: 
I0122 11:44:50.241004 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.260942 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.633822 4874 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.639655 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.639726 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.654707 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.662158 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.667648 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.676909 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.676887675 podStartE2EDuration="15.676887675s" podCreationTimestamp="2026-01-22 11:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:44:50.672339352 +0000 UTC m=+264.517410462" watchObservedRunningTime="2026-01-22 11:44:50.676887675 +0000 UTC m=+264.521958755" Jan 22 11:44:50 crc 
kubenswrapper[4874]: I0122 11:44:50.805219 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.812255 4874 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.825547 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.852764 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.877712 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.890571 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.912774 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.917910 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 11:44:50 crc kubenswrapper[4874]: I0122 11:44:50.925762 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.053621 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.062513 4874 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.076060 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.144871 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.239211 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.288369 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.380557 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.391472 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.395338 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.473074 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.486471 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.489111 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 
11:44:51.544278 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.555991 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.612920 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.769532 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.790500 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.874481 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.899571 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.900897 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.993345 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 11:44:51 crc kubenswrapper[4874]: I0122 11:44:51.996693 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.024024 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 
22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.087881 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.130473 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.179376 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.182876 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.244808 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.267092 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.381374 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.527458 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.558571 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.561225 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.659206 4874 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.738329 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.802704 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.827202 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.912532 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.937492 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.978341 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 11:44:52 crc kubenswrapper[4874]: I0122 11:44:52.985451 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.005140 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.005457 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.057488 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.077727 4874 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.123263 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.126156 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.137368 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.156657 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.168838 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.191218 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.201860 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.227171 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.233213 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.316757 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 
22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.325780 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.478776 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.499589 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.534334 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.684673 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.702435 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.702902 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.717541 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.751423 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.760862 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.791203 4874 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"kube-root-ca.crt" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.851200 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.857007 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.942910 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.979476 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 11:44:53 crc kubenswrapper[4874]: I0122 11:44:53.984409 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.000008 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.087950 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.213633 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.214450 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.244766 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.317439 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.319465 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.385482 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.421682 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.448040 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.570666 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.659155 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.694479 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.845118 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.910605 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:44:54 crc kubenswrapper[4874]: I0122 11:44:54.916200 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 
11:44:55 crc kubenswrapper[4874]: I0122 11:44:55.032476 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 11:44:55 crc kubenswrapper[4874]: I0122 11:44:55.137988 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 11:44:55 crc kubenswrapper[4874]: I0122 11:44:55.187683 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 11:44:55 crc kubenswrapper[4874]: I0122 11:44:55.228548 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 11:44:55 crc kubenswrapper[4874]: I0122 11:44:55.276615 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 11:44:55 crc kubenswrapper[4874]: I0122 11:44:55.315053 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 11:44:55 crc kubenswrapper[4874]: I0122 11:44:55.316314 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 11:44:55 crc kubenswrapper[4874]: I0122 11:44:55.348100 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 11:44:55 crc kubenswrapper[4874]: I0122 11:44:55.379556 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 11:44:55 crc kubenswrapper[4874]: I0122 11:44:55.655104 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 11:44:55 crc kubenswrapper[4874]: I0122 11:44:55.733042 4874 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 11:44:55 crc kubenswrapper[4874]: I0122 11:44:55.886997 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 11:44:55 crc kubenswrapper[4874]: I0122 11:44:55.903743 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.067678 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.090691 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.112202 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.200361 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.203080 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.225729 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.250708 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.347740 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.421425 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.502288 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.515770 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.578135 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.706686 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.895056 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 11:44:56 crc kubenswrapper[4874]: I0122 11:44:56.945093 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.002271 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.113316 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.128991 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.137840 
4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.167937 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.190420 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.225738 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.252769 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.261954 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.350993 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.401165 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.402003 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.450370 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.510095 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 
11:44:57.565741 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.571160 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.605039 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.612477 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.721682 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.783945 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.843530 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.921963 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.958832 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 11:44:57 crc kubenswrapper[4874]: I0122 11:44:57.969512 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.185230 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 
11:44:58.295053 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.408230 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.417437 4874 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.427933 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.433576 4874 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.433809 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://7b60b861ffc1abe331ac35bff6d3cb66212140990112c08598f13b185feceff7" gracePeriod=5 Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.450431 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.477563 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.554143 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.614100 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" 
Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.631734 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.705133 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.781918 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.805331 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.815127 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.848191 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.875714 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 11:44:58 crc kubenswrapper[4874]: I0122 11:44:58.891199 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.035429 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.221746 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.248643 4874 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.304004 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.349628 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.351915 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.369043 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.371284 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.438117 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.477528 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.477531 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.550582 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.551881 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 11:44:59 crc 
kubenswrapper[4874]: I0122 11:44:59.601538 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.677637 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.729861 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.745824 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.776170 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.860891 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 11:44:59 crc kubenswrapper[4874]: I0122 11:44:59.949274 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 11:45:00 crc kubenswrapper[4874]: I0122 11:45:00.138308 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 11:45:00 crc kubenswrapper[4874]: I0122 11:45:00.248206 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 11:45:00 crc kubenswrapper[4874]: I0122 11:45:00.321500 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 11:45:00 crc kubenswrapper[4874]: I0122 11:45:00.494622 4874 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 11:45:00 crc kubenswrapper[4874]: I0122 11:45:00.495159 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 11:45:00 crc kubenswrapper[4874]: I0122 11:45:00.817489 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 11:45:00 crc kubenswrapper[4874]: I0122 11:45:00.869170 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 11:45:01 crc kubenswrapper[4874]: I0122 11:45:01.091998 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 11:45:01 crc kubenswrapper[4874]: I0122 11:45:01.184648 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 11:45:01 crc kubenswrapper[4874]: I0122 11:45:01.251108 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 11:45:01 crc kubenswrapper[4874]: I0122 11:45:01.293889 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 11:45:01 crc kubenswrapper[4874]: I0122 11:45:01.329444 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 11:45:01 crc kubenswrapper[4874]: I0122 11:45:01.384959 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 11:45:01 crc kubenswrapper[4874]: I0122 11:45:01.403192 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 11:45:01 crc 
kubenswrapper[4874]: I0122 11:45:01.449784 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 11:45:01 crc kubenswrapper[4874]: I0122 11:45:01.560804 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 11:45:01 crc kubenswrapper[4874]: I0122 11:45:01.893893 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 11:45:01 crc kubenswrapper[4874]: I0122 11:45:01.899273 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 11:45:02 crc kubenswrapper[4874]: I0122 11:45:02.152950 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 11:45:02 crc kubenswrapper[4874]: I0122 11:45:02.378898 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 11:45:02 crc kubenswrapper[4874]: I0122 11:45:02.397924 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 11:45:02 crc kubenswrapper[4874]: I0122 11:45:02.449597 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 11:45:02 crc kubenswrapper[4874]: I0122 11:45:02.500856 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 11:45:02 crc kubenswrapper[4874]: I0122 11:45:02.573470 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 11:45:02 crc kubenswrapper[4874]: I0122 11:45:02.659568 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 11:45:02 crc kubenswrapper[4874]: I0122 11:45:02.704748 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 11:45:03 crc kubenswrapper[4874]: I0122 11:45:03.205471 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 11:45:03 crc kubenswrapper[4874]: I0122 11:45:03.359467 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 11:45:03 crc kubenswrapper[4874]: I0122 11:45:03.686292 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.000325 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.000389 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.112501 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.112580 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.112598 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.112667 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.112719 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.113052 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: 
"var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.113318 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.113332 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.113434 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.119571 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.123034 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.123101 4874 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7b60b861ffc1abe331ac35bff6d3cb66212140990112c08598f13b185feceff7" exitCode=137 Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.123160 4874 scope.go:117] "RemoveContainer" containerID="7b60b861ffc1abe331ac35bff6d3cb66212140990112c08598f13b185feceff7" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.123211 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.154876 4874 scope.go:117] "RemoveContainer" containerID="7b60b861ffc1abe331ac35bff6d3cb66212140990112c08598f13b185feceff7" Jan 22 11:45:04 crc kubenswrapper[4874]: E0122 11:45:04.155376 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b60b861ffc1abe331ac35bff6d3cb66212140990112c08598f13b185feceff7\": container with ID starting with 7b60b861ffc1abe331ac35bff6d3cb66212140990112c08598f13b185feceff7 not found: ID does not exist" containerID="7b60b861ffc1abe331ac35bff6d3cb66212140990112c08598f13b185feceff7" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.155438 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b60b861ffc1abe331ac35bff6d3cb66212140990112c08598f13b185feceff7"} err="failed to get container status \"7b60b861ffc1abe331ac35bff6d3cb66212140990112c08598f13b185feceff7\": rpc error: code = NotFound desc = could not find container 
\"7b60b861ffc1abe331ac35bff6d3cb66212140990112c08598f13b185feceff7\": container with ID starting with 7b60b861ffc1abe331ac35bff6d3cb66212140990112c08598f13b185feceff7 not found: ID does not exist" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.214483 4874 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.214516 4874 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.214525 4874 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.214533 4874 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.214541 4874 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.632983 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 11:45:04 crc kubenswrapper[4874]: I0122 11:45:04.728149 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.714458 4874 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn"] Jan 22 11:45:11 crc kubenswrapper[4874]: E0122 11:45:11.715581 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.715599 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 11:45:11 crc kubenswrapper[4874]: E0122 11:45:11.715612 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c5453e-ced5-4d10-b696-df2a76b6a783" containerName="installer" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.715619 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c5453e-ced5-4d10-b696-df2a76b6a783" containerName="installer" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.715741 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.715755 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1c5453e-ced5-4d10-b696-df2a76b6a783" containerName="installer" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.716656 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.719077 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.719181 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.731599 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn"] Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.811368 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce45c6ff-601e-4b12-97b6-4737304db2d7-config-volume\") pod \"collect-profiles-29484705-xlhwn\" (UID: \"ce45c6ff-601e-4b12-97b6-4737304db2d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.811681 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txdrs\" (UniqueName: \"kubernetes.io/projected/ce45c6ff-601e-4b12-97b6-4737304db2d7-kube-api-access-txdrs\") pod \"collect-profiles-29484705-xlhwn\" (UID: \"ce45c6ff-601e-4b12-97b6-4737304db2d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.811778 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce45c6ff-601e-4b12-97b6-4737304db2d7-secret-volume\") pod \"collect-profiles-29484705-xlhwn\" (UID: \"ce45c6ff-601e-4b12-97b6-4737304db2d7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.913198 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txdrs\" (UniqueName: \"kubernetes.io/projected/ce45c6ff-601e-4b12-97b6-4737304db2d7-kube-api-access-txdrs\") pod \"collect-profiles-29484705-xlhwn\" (UID: \"ce45c6ff-601e-4b12-97b6-4737304db2d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.913587 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce45c6ff-601e-4b12-97b6-4737304db2d7-secret-volume\") pod \"collect-profiles-29484705-xlhwn\" (UID: \"ce45c6ff-601e-4b12-97b6-4737304db2d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.913670 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce45c6ff-601e-4b12-97b6-4737304db2d7-config-volume\") pod \"collect-profiles-29484705-xlhwn\" (UID: \"ce45c6ff-601e-4b12-97b6-4737304db2d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.914628 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce45c6ff-601e-4b12-97b6-4737304db2d7-config-volume\") pod \"collect-profiles-29484705-xlhwn\" (UID: \"ce45c6ff-601e-4b12-97b6-4737304db2d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.919218 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ce45c6ff-601e-4b12-97b6-4737304db2d7-secret-volume\") pod \"collect-profiles-29484705-xlhwn\" (UID: \"ce45c6ff-601e-4b12-97b6-4737304db2d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" Jan 22 11:45:11 crc kubenswrapper[4874]: I0122 11:45:11.940291 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txdrs\" (UniqueName: \"kubernetes.io/projected/ce45c6ff-601e-4b12-97b6-4737304db2d7-kube-api-access-txdrs\") pod \"collect-profiles-29484705-xlhwn\" (UID: \"ce45c6ff-601e-4b12-97b6-4737304db2d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" Jan 22 11:45:12 crc kubenswrapper[4874]: I0122 11:45:12.033359 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" Jan 22 11:45:12 crc kubenswrapper[4874]: I0122 11:45:12.402838 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn"] Jan 22 11:45:13 crc kubenswrapper[4874]: I0122 11:45:13.179709 4874 generic.go:334] "Generic (PLEG): container finished" podID="ce45c6ff-601e-4b12-97b6-4737304db2d7" containerID="e4f9ca4f65548ceb4a0c2ab28cc0dcb70d2bfed6abf0e18366dac77436253610" exitCode=0 Jan 22 11:45:13 crc kubenswrapper[4874]: I0122 11:45:13.180023 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" event={"ID":"ce45c6ff-601e-4b12-97b6-4737304db2d7","Type":"ContainerDied","Data":"e4f9ca4f65548ceb4a0c2ab28cc0dcb70d2bfed6abf0e18366dac77436253610"} Jan 22 11:45:13 crc kubenswrapper[4874]: I0122 11:45:13.180063 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" 
event={"ID":"ce45c6ff-601e-4b12-97b6-4737304db2d7","Type":"ContainerStarted","Data":"0ad7d6ea5a30e76ca03ea0e0ab963e27f80e9b8ce1ebe3c88dec81c8a4abfc22"} Jan 22 11:45:14 crc kubenswrapper[4874]: I0122 11:45:14.390331 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" Jan 22 11:45:14 crc kubenswrapper[4874]: I0122 11:45:14.545081 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce45c6ff-601e-4b12-97b6-4737304db2d7-secret-volume\") pod \"ce45c6ff-601e-4b12-97b6-4737304db2d7\" (UID: \"ce45c6ff-601e-4b12-97b6-4737304db2d7\") " Jan 22 11:45:14 crc kubenswrapper[4874]: I0122 11:45:14.545457 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txdrs\" (UniqueName: \"kubernetes.io/projected/ce45c6ff-601e-4b12-97b6-4737304db2d7-kube-api-access-txdrs\") pod \"ce45c6ff-601e-4b12-97b6-4737304db2d7\" (UID: \"ce45c6ff-601e-4b12-97b6-4737304db2d7\") " Jan 22 11:45:14 crc kubenswrapper[4874]: I0122 11:45:14.545664 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce45c6ff-601e-4b12-97b6-4737304db2d7-config-volume\") pod \"ce45c6ff-601e-4b12-97b6-4737304db2d7\" (UID: \"ce45c6ff-601e-4b12-97b6-4737304db2d7\") " Jan 22 11:45:14 crc kubenswrapper[4874]: I0122 11:45:14.546177 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce45c6ff-601e-4b12-97b6-4737304db2d7-config-volume" (OuterVolumeSpecName: "config-volume") pod "ce45c6ff-601e-4b12-97b6-4737304db2d7" (UID: "ce45c6ff-601e-4b12-97b6-4737304db2d7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:45:14 crc kubenswrapper[4874]: I0122 11:45:14.550035 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce45c6ff-601e-4b12-97b6-4737304db2d7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ce45c6ff-601e-4b12-97b6-4737304db2d7" (UID: "ce45c6ff-601e-4b12-97b6-4737304db2d7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:45:14 crc kubenswrapper[4874]: I0122 11:45:14.550331 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce45c6ff-601e-4b12-97b6-4737304db2d7-kube-api-access-txdrs" (OuterVolumeSpecName: "kube-api-access-txdrs") pod "ce45c6ff-601e-4b12-97b6-4737304db2d7" (UID: "ce45c6ff-601e-4b12-97b6-4737304db2d7"). InnerVolumeSpecName "kube-api-access-txdrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:45:14 crc kubenswrapper[4874]: I0122 11:45:14.647037 4874 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce45c6ff-601e-4b12-97b6-4737304db2d7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:14 crc kubenswrapper[4874]: I0122 11:45:14.647083 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txdrs\" (UniqueName: \"kubernetes.io/projected/ce45c6ff-601e-4b12-97b6-4737304db2d7-kube-api-access-txdrs\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:14 crc kubenswrapper[4874]: I0122 11:45:14.647102 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce45c6ff-601e-4b12-97b6-4737304db2d7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:15 crc kubenswrapper[4874]: I0122 11:45:15.190114 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" 
event={"ID":"ce45c6ff-601e-4b12-97b6-4737304db2d7","Type":"ContainerDied","Data":"0ad7d6ea5a30e76ca03ea0e0ab963e27f80e9b8ce1ebe3c88dec81c8a4abfc22"} Jan 22 11:45:15 crc kubenswrapper[4874]: I0122 11:45:15.190428 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ad7d6ea5a30e76ca03ea0e0ab963e27f80e9b8ce1ebe3c88dec81c8a4abfc22" Jan 22 11:45:15 crc kubenswrapper[4874]: I0122 11:45:15.190204 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn" Jan 22 11:45:20 crc kubenswrapper[4874]: I0122 11:45:20.225382 4874 generic.go:334] "Generic (PLEG): container finished" podID="213d34f5-75cd-459c-9e56-2938fe5e3950" containerID="a3ab1ecac652d1663a1d25c20a4f94b60f4dcf52866d48a08ff55d35fc806fe3" exitCode=0 Jan 22 11:45:20 crc kubenswrapper[4874]: I0122 11:45:20.225644 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" event={"ID":"213d34f5-75cd-459c-9e56-2938fe5e3950","Type":"ContainerDied","Data":"a3ab1ecac652d1663a1d25c20a4f94b60f4dcf52866d48a08ff55d35fc806fe3"} Jan 22 11:45:20 crc kubenswrapper[4874]: I0122 11:45:20.227723 4874 scope.go:117] "RemoveContainer" containerID="a3ab1ecac652d1663a1d25c20a4f94b60f4dcf52866d48a08ff55d35fc806fe3" Jan 22 11:45:21 crc kubenswrapper[4874]: I0122 11:45:21.236599 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" event={"ID":"213d34f5-75cd-459c-9e56-2938fe5e3950","Type":"ContainerStarted","Data":"a89cf0f40681e6ea2394a8efa5206b34afe35c8149f80b74967ca70f66ececcb"} Jan 22 11:45:21 crc kubenswrapper[4874]: I0122 11:45:21.237369 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:45:21 crc kubenswrapper[4874]: I0122 11:45:21.243495 4874 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.399838 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-swcnq"] Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.400556 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" podUID="f5464a23-ec80-4717-bfe0-6efeab811853" containerName="controller-manager" containerID="cri-o://fff471b9ef5cb3368fda0285ee54baef17701d5ccdb0a7c909cc9a9b7b3f50f5" gracePeriod=30 Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.484108 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87"] Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.484358 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" podUID="65f71c2e-ab34-4d33-905f-609555dab78c" containerName="route-controller-manager" containerID="cri-o://84a7330ee9f1d0d95ea8e0bf8a052aae4a2dfa5a3e264d704406ce13a297fbb1" gracePeriod=30 Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.839732 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.895680 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.964387 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m29d2\" (UniqueName: \"kubernetes.io/projected/65f71c2e-ab34-4d33-905f-609555dab78c-kube-api-access-m29d2\") pod \"65f71c2e-ab34-4d33-905f-609555dab78c\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.964501 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nm6v\" (UniqueName: \"kubernetes.io/projected/f5464a23-ec80-4717-bfe0-6efeab811853-kube-api-access-2nm6v\") pod \"f5464a23-ec80-4717-bfe0-6efeab811853\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.964551 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-proxy-ca-bundles\") pod \"f5464a23-ec80-4717-bfe0-6efeab811853\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.964588 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-config\") pod \"f5464a23-ec80-4717-bfe0-6efeab811853\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.964616 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65f71c2e-ab34-4d33-905f-609555dab78c-serving-cert\") pod \"65f71c2e-ab34-4d33-905f-609555dab78c\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.964652 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65f71c2e-ab34-4d33-905f-609555dab78c-client-ca\") pod \"65f71c2e-ab34-4d33-905f-609555dab78c\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.964684 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5464a23-ec80-4717-bfe0-6efeab811853-serving-cert\") pod \"f5464a23-ec80-4717-bfe0-6efeab811853\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.964829 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-client-ca\") pod \"f5464a23-ec80-4717-bfe0-6efeab811853\" (UID: \"f5464a23-ec80-4717-bfe0-6efeab811853\") " Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.965522 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f5464a23-ec80-4717-bfe0-6efeab811853" (UID: "f5464a23-ec80-4717-bfe0-6efeab811853"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.965540 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-config" (OuterVolumeSpecName: "config") pod "f5464a23-ec80-4717-bfe0-6efeab811853" (UID: "f5464a23-ec80-4717-bfe0-6efeab811853"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.965577 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-client-ca" (OuterVolumeSpecName: "client-ca") pod "f5464a23-ec80-4717-bfe0-6efeab811853" (UID: "f5464a23-ec80-4717-bfe0-6efeab811853"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.965680 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65f71c2e-ab34-4d33-905f-609555dab78c-config\") pod \"65f71c2e-ab34-4d33-905f-609555dab78c\" (UID: \"65f71c2e-ab34-4d33-905f-609555dab78c\") " Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.966233 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65f71c2e-ab34-4d33-905f-609555dab78c-client-ca" (OuterVolumeSpecName: "client-ca") pod "65f71c2e-ab34-4d33-905f-609555dab78c" (UID: "65f71c2e-ab34-4d33-905f-609555dab78c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.966853 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65f71c2e-ab34-4d33-905f-609555dab78c-config" (OuterVolumeSpecName: "config") pod "65f71c2e-ab34-4d33-905f-609555dab78c" (UID: "65f71c2e-ab34-4d33-905f-609555dab78c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.967568 4874 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.967596 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65f71c2e-ab34-4d33-905f-609555dab78c-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.967609 4874 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.967628 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5464a23-ec80-4717-bfe0-6efeab811853-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.967638 4874 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65f71c2e-ab34-4d33-905f-609555dab78c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.970530 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5464a23-ec80-4717-bfe0-6efeab811853-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f5464a23-ec80-4717-bfe0-6efeab811853" (UID: "f5464a23-ec80-4717-bfe0-6efeab811853"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.970701 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f71c2e-ab34-4d33-905f-609555dab78c-kube-api-access-m29d2" (OuterVolumeSpecName: "kube-api-access-m29d2") pod "65f71c2e-ab34-4d33-905f-609555dab78c" (UID: "65f71c2e-ab34-4d33-905f-609555dab78c"). InnerVolumeSpecName "kube-api-access-m29d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.970698 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5464a23-ec80-4717-bfe0-6efeab811853-kube-api-access-2nm6v" (OuterVolumeSpecName: "kube-api-access-2nm6v") pod "f5464a23-ec80-4717-bfe0-6efeab811853" (UID: "f5464a23-ec80-4717-bfe0-6efeab811853"). InnerVolumeSpecName "kube-api-access-2nm6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:45:23 crc kubenswrapper[4874]: I0122 11:45:23.970727 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f71c2e-ab34-4d33-905f-609555dab78c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "65f71c2e-ab34-4d33-905f-609555dab78c" (UID: "65f71c2e-ab34-4d33-905f-609555dab78c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.069246 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m29d2\" (UniqueName: \"kubernetes.io/projected/65f71c2e-ab34-4d33-905f-609555dab78c-kube-api-access-m29d2\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.069279 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nm6v\" (UniqueName: \"kubernetes.io/projected/f5464a23-ec80-4717-bfe0-6efeab811853-kube-api-access-2nm6v\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.069288 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65f71c2e-ab34-4d33-905f-609555dab78c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.069297 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5464a23-ec80-4717-bfe0-6efeab811853-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.256943 4874 generic.go:334] "Generic (PLEG): container finished" podID="65f71c2e-ab34-4d33-905f-609555dab78c" containerID="84a7330ee9f1d0d95ea8e0bf8a052aae4a2dfa5a3e264d704406ce13a297fbb1" exitCode=0 Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.257025 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" event={"ID":"65f71c2e-ab34-4d33-905f-609555dab78c","Type":"ContainerDied","Data":"84a7330ee9f1d0d95ea8e0bf8a052aae4a2dfa5a3e264d704406ce13a297fbb1"} Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.257057 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" 
event={"ID":"65f71c2e-ab34-4d33-905f-609555dab78c","Type":"ContainerDied","Data":"a9c651399a50d57c614df95905597a481669874ba9ca8205cf5d6198908fbe7c"} Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.257059 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.257076 4874 scope.go:117] "RemoveContainer" containerID="84a7330ee9f1d0d95ea8e0bf8a052aae4a2dfa5a3e264d704406ce13a297fbb1" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.260672 4874 generic.go:334] "Generic (PLEG): container finished" podID="f5464a23-ec80-4717-bfe0-6efeab811853" containerID="fff471b9ef5cb3368fda0285ee54baef17701d5ccdb0a7c909cc9a9b7b3f50f5" exitCode=0 Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.260746 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" event={"ID":"f5464a23-ec80-4717-bfe0-6efeab811853","Type":"ContainerDied","Data":"fff471b9ef5cb3368fda0285ee54baef17701d5ccdb0a7c909cc9a9b7b3f50f5"} Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.260770 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.260797 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-swcnq" event={"ID":"f5464a23-ec80-4717-bfe0-6efeab811853","Type":"ContainerDied","Data":"d437fa062da853d4db086da0d6818e76b56a3a9b163a7e48bb221915ade96982"} Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.282646 4874 scope.go:117] "RemoveContainer" containerID="84a7330ee9f1d0d95ea8e0bf8a052aae4a2dfa5a3e264d704406ce13a297fbb1" Jan 22 11:45:24 crc kubenswrapper[4874]: E0122 11:45:24.283214 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84a7330ee9f1d0d95ea8e0bf8a052aae4a2dfa5a3e264d704406ce13a297fbb1\": container with ID starting with 84a7330ee9f1d0d95ea8e0bf8a052aae4a2dfa5a3e264d704406ce13a297fbb1 not found: ID does not exist" containerID="84a7330ee9f1d0d95ea8e0bf8a052aae4a2dfa5a3e264d704406ce13a297fbb1" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.283252 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84a7330ee9f1d0d95ea8e0bf8a052aae4a2dfa5a3e264d704406ce13a297fbb1"} err="failed to get container status \"84a7330ee9f1d0d95ea8e0bf8a052aae4a2dfa5a3e264d704406ce13a297fbb1\": rpc error: code = NotFound desc = could not find container \"84a7330ee9f1d0d95ea8e0bf8a052aae4a2dfa5a3e264d704406ce13a297fbb1\": container with ID starting with 84a7330ee9f1d0d95ea8e0bf8a052aae4a2dfa5a3e264d704406ce13a297fbb1 not found: ID does not exist" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.283280 4874 scope.go:117] "RemoveContainer" containerID="fff471b9ef5cb3368fda0285ee54baef17701d5ccdb0a7c909cc9a9b7b3f50f5" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.311301 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87"] Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.316975 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2bn87"] Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.327773 4874 scope.go:117] "RemoveContainer" containerID="fff471b9ef5cb3368fda0285ee54baef17701d5ccdb0a7c909cc9a9b7b3f50f5" Jan 22 11:45:24 crc kubenswrapper[4874]: E0122 11:45:24.328251 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fff471b9ef5cb3368fda0285ee54baef17701d5ccdb0a7c909cc9a9b7b3f50f5\": container with ID starting with fff471b9ef5cb3368fda0285ee54baef17701d5ccdb0a7c909cc9a9b7b3f50f5 not found: ID does not exist" containerID="fff471b9ef5cb3368fda0285ee54baef17701d5ccdb0a7c909cc9a9b7b3f50f5" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.328311 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-swcnq"] Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.328303 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fff471b9ef5cb3368fda0285ee54baef17701d5ccdb0a7c909cc9a9b7b3f50f5"} err="failed to get container status \"fff471b9ef5cb3368fda0285ee54baef17701d5ccdb0a7c909cc9a9b7b3f50f5\": rpc error: code = NotFound desc = could not find container \"fff471b9ef5cb3368fda0285ee54baef17701d5ccdb0a7c909cc9a9b7b3f50f5\": container with ID starting with fff471b9ef5cb3368fda0285ee54baef17701d5ccdb0a7c909cc9a9b7b3f50f5 not found: ID does not exist" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.338462 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-swcnq"] Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.707195 4874 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2"] Jan 22 11:45:24 crc kubenswrapper[4874]: E0122 11:45:24.708099 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce45c6ff-601e-4b12-97b6-4737304db2d7" containerName="collect-profiles" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.708133 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce45c6ff-601e-4b12-97b6-4737304db2d7" containerName="collect-profiles" Jan 22 11:45:24 crc kubenswrapper[4874]: E0122 11:45:24.708160 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f71c2e-ab34-4d33-905f-609555dab78c" containerName="route-controller-manager" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.708177 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f71c2e-ab34-4d33-905f-609555dab78c" containerName="route-controller-manager" Jan 22 11:45:24 crc kubenswrapper[4874]: E0122 11:45:24.708212 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5464a23-ec80-4717-bfe0-6efeab811853" containerName="controller-manager" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.708231 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5464a23-ec80-4717-bfe0-6efeab811853" containerName="controller-manager" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.708487 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5464a23-ec80-4717-bfe0-6efeab811853" containerName="controller-manager" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.708520 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce45c6ff-601e-4b12-97b6-4737304db2d7" containerName="collect-profiles" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.708540 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f71c2e-ab34-4d33-905f-609555dab78c" containerName="route-controller-manager" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 
11:45:24.709181 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.711594 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.711636 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7"] Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.711656 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.711841 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.712337 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.713384 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.714153 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.716589 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.720033 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.720737 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.721756 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.722233 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.724125 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.724571 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.734357 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f71c2e-ab34-4d33-905f-609555dab78c" path="/var/lib/kubelet/pods/65f71c2e-ab34-4d33-905f-609555dab78c/volumes" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.735238 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5464a23-ec80-4717-bfe0-6efeab811853" path="/var/lib/kubelet/pods/f5464a23-ec80-4717-bfe0-6efeab811853/volumes" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.735875 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7"] Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.739688 4874 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2"] Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.783198 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ba4583a-60a2-4ace-ba80-8d804556d604-client-ca\") pod \"route-controller-manager-5764494d47-vsfq2\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.783246 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-client-ca\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.783274 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9625e0-ef53-4afd-9576-a41ad312a97d-serving-cert\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.783303 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xklr5\" (UniqueName: \"kubernetes.io/projected/8ba4583a-60a2-4ace-ba80-8d804556d604-kube-api-access-xklr5\") pod \"route-controller-manager-5764494d47-vsfq2\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.783322 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-config\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.783406 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-proxy-ca-bundles\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.783750 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rmmk\" (UniqueName: \"kubernetes.io/projected/9e9625e0-ef53-4afd-9576-a41ad312a97d-kube-api-access-2rmmk\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.783848 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ba4583a-60a2-4ace-ba80-8d804556d604-serving-cert\") pod \"route-controller-manager-5764494d47-vsfq2\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.783922 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba4583a-60a2-4ace-ba80-8d804556d604-config\") pod \"route-controller-manager-5764494d47-vsfq2\" (UID: 
\"8ba4583a-60a2-4ace-ba80-8d804556d604\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.788052 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.885737 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ba4583a-60a2-4ace-ba80-8d804556d604-client-ca\") pod \"route-controller-manager-5764494d47-vsfq2\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.885819 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-client-ca\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.885857 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9625e0-ef53-4afd-9576-a41ad312a97d-serving-cert\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.885896 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xklr5\" (UniqueName: \"kubernetes.io/projected/8ba4583a-60a2-4ace-ba80-8d804556d604-kube-api-access-xklr5\") pod \"route-controller-manager-5764494d47-vsfq2\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " 
pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.885925 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-config\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.885947 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-proxy-ca-bundles\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.885992 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rmmk\" (UniqueName: \"kubernetes.io/projected/9e9625e0-ef53-4afd-9576-a41ad312a97d-kube-api-access-2rmmk\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.886017 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ba4583a-60a2-4ace-ba80-8d804556d604-serving-cert\") pod \"route-controller-manager-5764494d47-vsfq2\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.886045 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8ba4583a-60a2-4ace-ba80-8d804556d604-config\") pod \"route-controller-manager-5764494d47-vsfq2\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.887027 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-client-ca\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.887986 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba4583a-60a2-4ace-ba80-8d804556d604-config\") pod \"route-controller-manager-5764494d47-vsfq2\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.888294 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-proxy-ca-bundles\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.889112 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-config\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.889899 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ba4583a-60a2-4ace-ba80-8d804556d604-client-ca\") pod \"route-controller-manager-5764494d47-vsfq2\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.891257 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ba4583a-60a2-4ace-ba80-8d804556d604-serving-cert\") pod \"route-controller-manager-5764494d47-vsfq2\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.892226 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9625e0-ef53-4afd-9576-a41ad312a97d-serving-cert\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.905674 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rmmk\" (UniqueName: \"kubernetes.io/projected/9e9625e0-ef53-4afd-9576-a41ad312a97d-kube-api-access-2rmmk\") pod \"controller-manager-789bbbcf9f-jm8f7\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:24 crc kubenswrapper[4874]: I0122 11:45:24.908187 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xklr5\" (UniqueName: \"kubernetes.io/projected/8ba4583a-60a2-4ace-ba80-8d804556d604-kube-api-access-xklr5\") pod \"route-controller-manager-5764494d47-vsfq2\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " 
pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:25 crc kubenswrapper[4874]: I0122 11:45:25.086763 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:25 crc kubenswrapper[4874]: I0122 11:45:25.095713 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:25 crc kubenswrapper[4874]: I0122 11:45:25.305950 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2"] Jan 22 11:45:25 crc kubenswrapper[4874]: I0122 11:45:25.332067 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7"] Jan 22 11:45:25 crc kubenswrapper[4874]: W0122 11:45:25.334995 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e9625e0_ef53_4afd_9576_a41ad312a97d.slice/crio-d2b018a92f66b0200304d7741f10b85f1e493ae49ca6a4372b8a9605031feb95 WatchSource:0}: Error finding container d2b018a92f66b0200304d7741f10b85f1e493ae49ca6a4372b8a9605031feb95: Status 404 returned error can't find the container with id d2b018a92f66b0200304d7741f10b85f1e493ae49ca6a4372b8a9605031feb95 Jan 22 11:45:26 crc kubenswrapper[4874]: I0122 11:45:26.277289 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" event={"ID":"8ba4583a-60a2-4ace-ba80-8d804556d604","Type":"ContainerStarted","Data":"21043c10d05ec41a76bd77364b386c886df31d414e1e8c2604a1659796f8c27e"} Jan 22 11:45:26 crc kubenswrapper[4874]: I0122 11:45:26.277972 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:26 crc kubenswrapper[4874]: I0122 11:45:26.277991 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" event={"ID":"8ba4583a-60a2-4ace-ba80-8d804556d604","Type":"ContainerStarted","Data":"ef570dea7a83f0ce123b85b79bfbc070103a700770f9817e620b420f82ef7f8a"} Jan 22 11:45:26 crc kubenswrapper[4874]: I0122 11:45:26.279539 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" event={"ID":"9e9625e0-ef53-4afd-9576-a41ad312a97d","Type":"ContainerStarted","Data":"5111f552e769bb7d8a02337c79b11097d1e325a76160fdcaa161f52d7010f983"} Jan 22 11:45:26 crc kubenswrapper[4874]: I0122 11:45:26.279654 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" event={"ID":"9e9625e0-ef53-4afd-9576-a41ad312a97d","Type":"ContainerStarted","Data":"d2b018a92f66b0200304d7741f10b85f1e493ae49ca6a4372b8a9605031feb95"} Jan 22 11:45:26 crc kubenswrapper[4874]: I0122 11:45:26.279883 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:26 crc kubenswrapper[4874]: I0122 11:45:26.283482 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:26 crc kubenswrapper[4874]: I0122 11:45:26.292345 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:45:26 crc kubenswrapper[4874]: I0122 11:45:26.296751 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" podStartSLOduration=3.296735142 
podStartE2EDuration="3.296735142s" podCreationTimestamp="2026-01-22 11:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:45:26.293334594 +0000 UTC m=+300.138405674" watchObservedRunningTime="2026-01-22 11:45:26.296735142 +0000 UTC m=+300.141806212" Jan 22 11:45:26 crc kubenswrapper[4874]: I0122 11:45:26.340298 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" podStartSLOduration=3.340276188 podStartE2EDuration="3.340276188s" podCreationTimestamp="2026-01-22 11:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:45:26.337819031 +0000 UTC m=+300.182890091" watchObservedRunningTime="2026-01-22 11:45:26.340276188 +0000 UTC m=+300.185347258" Jan 22 11:45:26 crc kubenswrapper[4874]: I0122 11:45:26.552882 4874 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 22 11:45:43 crc kubenswrapper[4874]: I0122 11:45:43.428180 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7"] Jan 22 11:45:43 crc kubenswrapper[4874]: I0122 11:45:43.431100 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" podUID="9e9625e0-ef53-4afd-9576-a41ad312a97d" containerName="controller-manager" containerID="cri-o://5111f552e769bb7d8a02337c79b11097d1e325a76160fdcaa161f52d7010f983" gracePeriod=30 Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.048753 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.200832 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-proxy-ca-bundles\") pod \"9e9625e0-ef53-4afd-9576-a41ad312a97d\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.200925 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-config\") pod \"9e9625e0-ef53-4afd-9576-a41ad312a97d\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.200983 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9625e0-ef53-4afd-9576-a41ad312a97d-serving-cert\") pod \"9e9625e0-ef53-4afd-9576-a41ad312a97d\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.201122 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rmmk\" (UniqueName: \"kubernetes.io/projected/9e9625e0-ef53-4afd-9576-a41ad312a97d-kube-api-access-2rmmk\") pod \"9e9625e0-ef53-4afd-9576-a41ad312a97d\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.201185 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-client-ca\") pod \"9e9625e0-ef53-4afd-9576-a41ad312a97d\" (UID: \"9e9625e0-ef53-4afd-9576-a41ad312a97d\") " Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.202025 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9e9625e0-ef53-4afd-9576-a41ad312a97d" (UID: "9e9625e0-ef53-4afd-9576-a41ad312a97d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.202163 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-client-ca" (OuterVolumeSpecName: "client-ca") pod "9e9625e0-ef53-4afd-9576-a41ad312a97d" (UID: "9e9625e0-ef53-4afd-9576-a41ad312a97d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.202230 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-config" (OuterVolumeSpecName: "config") pod "9e9625e0-ef53-4afd-9576-a41ad312a97d" (UID: "9e9625e0-ef53-4afd-9576-a41ad312a97d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.202831 4874 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.202881 4874 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.202909 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e9625e0-ef53-4afd-9576-a41ad312a97d-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.210526 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9625e0-ef53-4afd-9576-a41ad312a97d-kube-api-access-2rmmk" (OuterVolumeSpecName: "kube-api-access-2rmmk") pod "9e9625e0-ef53-4afd-9576-a41ad312a97d" (UID: "9e9625e0-ef53-4afd-9576-a41ad312a97d"). InnerVolumeSpecName "kube-api-access-2rmmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.214137 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9625e0-ef53-4afd-9576-a41ad312a97d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9e9625e0-ef53-4afd-9576-a41ad312a97d" (UID: "9e9625e0-ef53-4afd-9576-a41ad312a97d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.303460 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9625e0-ef53-4afd-9576-a41ad312a97d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.303523 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rmmk\" (UniqueName: \"kubernetes.io/projected/9e9625e0-ef53-4afd-9576-a41ad312a97d-kube-api-access-2rmmk\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.382545 4874 generic.go:334] "Generic (PLEG): container finished" podID="9e9625e0-ef53-4afd-9576-a41ad312a97d" containerID="5111f552e769bb7d8a02337c79b11097d1e325a76160fdcaa161f52d7010f983" exitCode=0 Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.382599 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" event={"ID":"9e9625e0-ef53-4afd-9576-a41ad312a97d","Type":"ContainerDied","Data":"5111f552e769bb7d8a02337c79b11097d1e325a76160fdcaa161f52d7010f983"} Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.382640 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" event={"ID":"9e9625e0-ef53-4afd-9576-a41ad312a97d","Type":"ContainerDied","Data":"d2b018a92f66b0200304d7741f10b85f1e493ae49ca6a4372b8a9605031feb95"} Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.382653 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.382664 4874 scope.go:117] "RemoveContainer" containerID="5111f552e769bb7d8a02337c79b11097d1e325a76160fdcaa161f52d7010f983" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.408332 4874 scope.go:117] "RemoveContainer" containerID="5111f552e769bb7d8a02337c79b11097d1e325a76160fdcaa161f52d7010f983" Jan 22 11:45:44 crc kubenswrapper[4874]: E0122 11:45:44.409565 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5111f552e769bb7d8a02337c79b11097d1e325a76160fdcaa161f52d7010f983\": container with ID starting with 5111f552e769bb7d8a02337c79b11097d1e325a76160fdcaa161f52d7010f983 not found: ID does not exist" containerID="5111f552e769bb7d8a02337c79b11097d1e325a76160fdcaa161f52d7010f983" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.409603 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5111f552e769bb7d8a02337c79b11097d1e325a76160fdcaa161f52d7010f983"} err="failed to get container status \"5111f552e769bb7d8a02337c79b11097d1e325a76160fdcaa161f52d7010f983\": rpc error: code = NotFound desc = could not find container \"5111f552e769bb7d8a02337c79b11097d1e325a76160fdcaa161f52d7010f983\": container with ID starting with 5111f552e769bb7d8a02337c79b11097d1e325a76160fdcaa161f52d7010f983 not found: ID does not exist" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.437481 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7"] Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.440513 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-789bbbcf9f-jm8f7"] Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.722781 4874 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="9e9625e0-ef53-4afd-9576-a41ad312a97d" path="/var/lib/kubelet/pods/9e9625e0-ef53-4afd-9576-a41ad312a97d/volumes" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.734281 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-869779d669-w4m58"] Jan 22 11:45:44 crc kubenswrapper[4874]: E0122 11:45:44.734490 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9625e0-ef53-4afd-9576-a41ad312a97d" containerName="controller-manager" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.734501 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9625e0-ef53-4afd-9576-a41ad312a97d" containerName="controller-manager" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.734606 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9625e0-ef53-4afd-9576-a41ad312a97d" containerName="controller-manager" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.734976 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.737379 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.737796 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.737988 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.738372 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.739204 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.741221 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.749619 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.755739 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-869779d669-w4m58"] Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.809520 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7305b4b5-c89c-4134-a5bf-f2e001268cb9-proxy-ca-bundles\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " 
pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.809565 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4p2h\" (UniqueName: \"kubernetes.io/projected/7305b4b5-c89c-4134-a5bf-f2e001268cb9-kube-api-access-g4p2h\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.809594 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7305b4b5-c89c-4134-a5bf-f2e001268cb9-client-ca\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.809627 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7305b4b5-c89c-4134-a5bf-f2e001268cb9-serving-cert\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.809645 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7305b4b5-c89c-4134-a5bf-f2e001268cb9-config\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.910358 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7305b4b5-c89c-4134-a5bf-f2e001268cb9-serving-cert\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.910422 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7305b4b5-c89c-4134-a5bf-f2e001268cb9-config\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.910480 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7305b4b5-c89c-4134-a5bf-f2e001268cb9-proxy-ca-bundles\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.910506 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4p2h\" (UniqueName: \"kubernetes.io/projected/7305b4b5-c89c-4134-a5bf-f2e001268cb9-kube-api-access-g4p2h\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.910534 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7305b4b5-c89c-4134-a5bf-f2e001268cb9-client-ca\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.911558 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7305b4b5-c89c-4134-a5bf-f2e001268cb9-client-ca\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.912164 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7305b4b5-c89c-4134-a5bf-f2e001268cb9-proxy-ca-bundles\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.912809 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7305b4b5-c89c-4134-a5bf-f2e001268cb9-config\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.915311 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7305b4b5-c89c-4134-a5bf-f2e001268cb9-serving-cert\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:44 crc kubenswrapper[4874]: I0122 11:45:44.937515 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4p2h\" (UniqueName: \"kubernetes.io/projected/7305b4b5-c89c-4134-a5bf-f2e001268cb9-kube-api-access-g4p2h\") pod \"controller-manager-869779d669-w4m58\" (UID: \"7305b4b5-c89c-4134-a5bf-f2e001268cb9\") " pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 
11:45:45 crc kubenswrapper[4874]: I0122 11:45:45.095656 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:45 crc kubenswrapper[4874]: I0122 11:45:45.303856 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-869779d669-w4m58"] Jan 22 11:45:45 crc kubenswrapper[4874]: W0122 11:45:45.326853 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7305b4b5_c89c_4134_a5bf_f2e001268cb9.slice/crio-e7303afdd5908ee0cf294c27cca6bd62b544b77d9d7c8178430ca1b757a66a87 WatchSource:0}: Error finding container e7303afdd5908ee0cf294c27cca6bd62b544b77d9d7c8178430ca1b757a66a87: Status 404 returned error can't find the container with id e7303afdd5908ee0cf294c27cca6bd62b544b77d9d7c8178430ca1b757a66a87 Jan 22 11:45:45 crc kubenswrapper[4874]: I0122 11:45:45.397331 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-869779d669-w4m58" event={"ID":"7305b4b5-c89c-4134-a5bf-f2e001268cb9","Type":"ContainerStarted","Data":"e7303afdd5908ee0cf294c27cca6bd62b544b77d9d7c8178430ca1b757a66a87"} Jan 22 11:45:46 crc kubenswrapper[4874]: I0122 11:45:46.404727 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-869779d669-w4m58" event={"ID":"7305b4b5-c89c-4134-a5bf-f2e001268cb9","Type":"ContainerStarted","Data":"8fe41fa666f6946fe9c91397a097869668c95495b0b14206c9f71d56bac2e3c8"} Jan 22 11:45:46 crc kubenswrapper[4874]: I0122 11:45:46.405082 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:46 crc kubenswrapper[4874]: I0122 11:45:46.409278 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-869779d669-w4m58" Jan 22 11:45:46 crc kubenswrapper[4874]: I0122 11:45:46.423348 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-869779d669-w4m58" podStartSLOduration=3.423330621 podStartE2EDuration="3.423330621s" podCreationTimestamp="2026-01-22 11:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:45:46.422881647 +0000 UTC m=+320.267952757" watchObservedRunningTime="2026-01-22 11:45:46.423330621 +0000 UTC m=+320.268401691" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.380552 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r2mwt"] Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.381178 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bk5qd"] Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.381492 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bk5qd" podUID="829e346d-eb89-4705-83c4-99d02fca8971" containerName="registry-server" containerID="cri-o://051c0e6931085c66795c18888f3c00c7f7391d31df635135b32e59153b20723e" gracePeriod=30 Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.381764 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r2mwt" podUID="dea4a6eb-c0b1-432a-81f2-e417250b0138" containerName="registry-server" containerID="cri-o://1f66114e745014e9b2811f8ce4a3519c0db97b1d4c5d32326d52ab45a752ab32" gracePeriod=30 Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.382899 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6dvvn"] Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 
11:45:49.383170 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" podUID="213d34f5-75cd-459c-9e56-2938fe5e3950" containerName="marketplace-operator" containerID="cri-o://a89cf0f40681e6ea2394a8efa5206b34afe35c8149f80b74967ca70f66ececcb" gracePeriod=30 Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.406847 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhb8k"] Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.407083 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xhb8k" podUID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" containerName="registry-server" containerID="cri-o://c92c089eac1a4f555258a56bb484b8c71a6cc82dce9149ac96cfba7e4b8f946a" gracePeriod=30 Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.414926 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2gqx"] Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.415145 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s2gqx" podUID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" containerName="registry-server" containerID="cri-o://d86e34d52dc8fcbe07403b1086c0ea0e849c1283e037e58504051c2dba7b2efd" gracePeriod=30 Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.424743 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-st8lg"] Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.425675 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.425721 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-st8lg"] Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.492026 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-n2lnd"] Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.492662 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.526191 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-n2lnd"] Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.566721 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtdms\" (UniqueName: \"kubernetes.io/projected/bb25a3d3-60b1-43ae-b007-19b20c362414-kube-api-access-dtdms\") pod \"marketplace-operator-79b997595-st8lg\" (UID: \"bb25a3d3-60b1-43ae-b007-19b20c362414\") " pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.566774 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb25a3d3-60b1-43ae-b007-19b20c362414-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-st8lg\" (UID: \"bb25a3d3-60b1-43ae-b007-19b20c362414\") " pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.566808 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bb25a3d3-60b1-43ae-b007-19b20c362414-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-st8lg\" (UID: \"bb25a3d3-60b1-43ae-b007-19b20c362414\") " pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.668554 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-trusted-ca\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.668607 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.668787 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtdms\" (UniqueName: \"kubernetes.io/projected/bb25a3d3-60b1-43ae-b007-19b20c362414-kube-api-access-dtdms\") pod \"marketplace-operator-79b997595-st8lg\" (UID: \"bb25a3d3-60b1-43ae-b007-19b20c362414\") " pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.668831 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-bound-sa-token\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc 
kubenswrapper[4874]: I0122 11:45:49.668871 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb25a3d3-60b1-43ae-b007-19b20c362414-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-st8lg\" (UID: \"bb25a3d3-60b1-43ae-b007-19b20c362414\") " pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.668896 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-registry-certificates\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.668929 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb25a3d3-60b1-43ae-b007-19b20c362414-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-st8lg\" (UID: \"bb25a3d3-60b1-43ae-b007-19b20c362414\") " pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.668983 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.669021 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-registry-tls\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.669036 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgfrm\" (UniqueName: \"kubernetes.io/projected/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-kube-api-access-rgfrm\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.669057 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.670974 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb25a3d3-60b1-43ae-b007-19b20c362414-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-st8lg\" (UID: \"bb25a3d3-60b1-43ae-b007-19b20c362414\") " pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.674626 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb25a3d3-60b1-43ae-b007-19b20c362414-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-st8lg\" (UID: \"bb25a3d3-60b1-43ae-b007-19b20c362414\") " pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" Jan 22 
11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.686129 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtdms\" (UniqueName: \"kubernetes.io/projected/bb25a3d3-60b1-43ae-b007-19b20c362414-kube-api-access-dtdms\") pod \"marketplace-operator-79b997595-st8lg\" (UID: \"bb25a3d3-60b1-43ae-b007-19b20c362414\") " pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.692330 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.742716 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.771008 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-registry-certificates\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.771078 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.771118 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-registry-tls\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.771152 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgfrm\" (UniqueName: \"kubernetes.io/projected/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-kube-api-access-rgfrm\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.771191 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-trusted-ca\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.771263 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.771326 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-bound-sa-token\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 
11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.774143 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.775816 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.778603 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-registry-tls\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.781328 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-trusted-ca\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.788589 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-registry-certificates\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.792257 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-bound-sa-token\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.796802 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgfrm\" (UniqueName: \"kubernetes.io/projected/c6b1b602-57a0-4459-bb14-e96f6fa1fe9c-kube-api-access-rgfrm\") pod \"image-registry-66df7c8f76-n2lnd\" (UID: \"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c\") " pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.827263 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:49 crc kubenswrapper[4874]: I0122 11:45:49.909679 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.067147 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.078480 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9tqj\" (UniqueName: \"kubernetes.io/projected/829e346d-eb89-4705-83c4-99d02fca8971-kube-api-access-k9tqj\") pod \"829e346d-eb89-4705-83c4-99d02fca8971\" (UID: \"829e346d-eb89-4705-83c4-99d02fca8971\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.078600 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829e346d-eb89-4705-83c4-99d02fca8971-catalog-content\") pod \"829e346d-eb89-4705-83c4-99d02fca8971\" (UID: \"829e346d-eb89-4705-83c4-99d02fca8971\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.078657 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829e346d-eb89-4705-83c4-99d02fca8971-utilities\") pod \"829e346d-eb89-4705-83c4-99d02fca8971\" (UID: \"829e346d-eb89-4705-83c4-99d02fca8971\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.079860 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829e346d-eb89-4705-83c4-99d02fca8971-utilities" (OuterVolumeSpecName: "utilities") pod "829e346d-eb89-4705-83c4-99d02fca8971" (UID: "829e346d-eb89-4705-83c4-99d02fca8971"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.090054 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829e346d-eb89-4705-83c4-99d02fca8971-kube-api-access-k9tqj" (OuterVolumeSpecName: "kube-api-access-k9tqj") pod "829e346d-eb89-4705-83c4-99d02fca8971" (UID: "829e346d-eb89-4705-83c4-99d02fca8971"). InnerVolumeSpecName "kube-api-access-k9tqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.101195 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.102818 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.122448 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.162856 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829e346d-eb89-4705-83c4-99d02fca8971-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "829e346d-eb89-4705-83c4-99d02fca8971" (UID: "829e346d-eb89-4705-83c4-99d02fca8971"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.179550 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79nw7\" (UniqueName: \"kubernetes.io/projected/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-kube-api-access-79nw7\") pod \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\" (UID: \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.179616 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-utilities\") pod \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\" (UID: \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.179769 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-catalog-content\") pod \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\" (UID: \"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.180117 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829e346d-eb89-4705-83c4-99d02fca8971-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.180151 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9tqj\" (UniqueName: \"kubernetes.io/projected/829e346d-eb89-4705-83c4-99d02fca8971-kube-api-access-k9tqj\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.180164 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829e346d-eb89-4705-83c4-99d02fca8971-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.180966 
4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-utilities" (OuterVolumeSpecName: "utilities") pod "09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" (UID: "09a47d07-9bf6-4033-8c08-cc3aef9fe4f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.181823 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-kube-api-access-79nw7" (OuterVolumeSpecName: "kube-api-access-79nw7") pod "09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" (UID: "09a47d07-9bf6-4033-8c08-cc3aef9fe4f4"). InnerVolumeSpecName "kube-api-access-79nw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.207106 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" (UID: "09a47d07-9bf6-4033-8c08-cc3aef9fe4f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.281439 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-trusted-ca\") pod \"213d34f5-75cd-459c-9e56-2938fe5e3950\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.281558 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-catalog-content\") pod \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\" (UID: \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.281639 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-operator-metrics\") pod \"213d34f5-75cd-459c-9e56-2938fe5e3950\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.281689 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2xvq\" (UniqueName: \"kubernetes.io/projected/dea4a6eb-c0b1-432a-81f2-e417250b0138-kube-api-access-w2xvq\") pod \"dea4a6eb-c0b1-432a-81f2-e417250b0138\" (UID: \"dea4a6eb-c0b1-432a-81f2-e417250b0138\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.281747 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea4a6eb-c0b1-432a-81f2-e417250b0138-catalog-content\") pod \"dea4a6eb-c0b1-432a-81f2-e417250b0138\" (UID: \"dea4a6eb-c0b1-432a-81f2-e417250b0138\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.281771 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-utilities\") pod \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\" (UID: \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.281803 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf6nt\" (UniqueName: \"kubernetes.io/projected/213d34f5-75cd-459c-9e56-2938fe5e3950-kube-api-access-lf6nt\") pod \"213d34f5-75cd-459c-9e56-2938fe5e3950\" (UID: \"213d34f5-75cd-459c-9e56-2938fe5e3950\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.281858 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92txx\" (UniqueName: \"kubernetes.io/projected/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-kube-api-access-92txx\") pod \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\" (UID: \"2d07947b-508d-4f12-ba1b-2d5f24a6db2c\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.282541 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-utilities" (OuterVolumeSpecName: "utilities") pod "2d07947b-508d-4f12-ba1b-2d5f24a6db2c" (UID: "2d07947b-508d-4f12-ba1b-2d5f24a6db2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.282568 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "213d34f5-75cd-459c-9e56-2938fe5e3950" (UID: "213d34f5-75cd-459c-9e56-2938fe5e3950"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.282732 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea4a6eb-c0b1-432a-81f2-e417250b0138-utilities\") pod \"dea4a6eb-c0b1-432a-81f2-e417250b0138\" (UID: \"dea4a6eb-c0b1-432a-81f2-e417250b0138\") " Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.283948 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea4a6eb-c0b1-432a-81f2-e417250b0138-utilities" (OuterVolumeSpecName: "utilities") pod "dea4a6eb-c0b1-432a-81f2-e417250b0138" (UID: "dea4a6eb-c0b1-432a-81f2-e417250b0138"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.284683 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea4a6eb-c0b1-432a-81f2-e417250b0138-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.284703 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79nw7\" (UniqueName: \"kubernetes.io/projected/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-kube-api-access-79nw7\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.284717 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.284729 4874 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.284763 4874 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.284774 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.285908 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-kube-api-access-92txx" (OuterVolumeSpecName: "kube-api-access-92txx") pod "2d07947b-508d-4f12-ba1b-2d5f24a6db2c" (UID: "2d07947b-508d-4f12-ba1b-2d5f24a6db2c"). InnerVolumeSpecName "kube-api-access-92txx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.285784 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea4a6eb-c0b1-432a-81f2-e417250b0138-kube-api-access-w2xvq" (OuterVolumeSpecName: "kube-api-access-w2xvq") pod "dea4a6eb-c0b1-432a-81f2-e417250b0138" (UID: "dea4a6eb-c0b1-432a-81f2-e417250b0138"). InnerVolumeSpecName "kube-api-access-w2xvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.286057 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213d34f5-75cd-459c-9e56-2938fe5e3950-kube-api-access-lf6nt" (OuterVolumeSpecName: "kube-api-access-lf6nt") pod "213d34f5-75cd-459c-9e56-2938fe5e3950" (UID: "213d34f5-75cd-459c-9e56-2938fe5e3950"). InnerVolumeSpecName "kube-api-access-lf6nt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.289432 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "213d34f5-75cd-459c-9e56-2938fe5e3950" (UID: "213d34f5-75cd-459c-9e56-2938fe5e3950"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.315935 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-st8lg"] Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.334651 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea4a6eb-c0b1-432a-81f2-e417250b0138-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dea4a6eb-c0b1-432a-81f2-e417250b0138" (UID: "dea4a6eb-c0b1-432a-81f2-e417250b0138"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.385886 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea4a6eb-c0b1-432a-81f2-e417250b0138-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.386220 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf6nt\" (UniqueName: \"kubernetes.io/projected/213d34f5-75cd-459c-9e56-2938fe5e3950-kube-api-access-lf6nt\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.386234 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92txx\" (UniqueName: \"kubernetes.io/projected/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-kube-api-access-92txx\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.386275 4874 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/213d34f5-75cd-459c-9e56-2938fe5e3950-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.386291 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2xvq\" (UniqueName: \"kubernetes.io/projected/dea4a6eb-c0b1-432a-81f2-e417250b0138-kube-api-access-w2xvq\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.412677 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d07947b-508d-4f12-ba1b-2d5f24a6db2c" (UID: "2d07947b-508d-4f12-ba1b-2d5f24a6db2c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.415906 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-n2lnd"] Jan 22 11:45:50 crc kubenswrapper[4874]: W0122 11:45:50.420843 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6b1b602_57a0_4459_bb14_e96f6fa1fe9c.slice/crio-ed2fc786963a7134e138b006ccc00a8ca69ad6f8e26c223c0b234675e09772cf WatchSource:0}: Error finding container ed2fc786963a7134e138b006ccc00a8ca69ad6f8e26c223c0b234675e09772cf: Status 404 returned error can't find the container with id ed2fc786963a7134e138b006ccc00a8ca69ad6f8e26c223c0b234675e09772cf Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.451895 4874 generic.go:334] "Generic (PLEG): container finished" podID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" containerID="c92c089eac1a4f555258a56bb484b8c71a6cc82dce9149ac96cfba7e4b8f946a" exitCode=0 Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.451983 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhb8k" event={"ID":"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4","Type":"ContainerDied","Data":"c92c089eac1a4f555258a56bb484b8c71a6cc82dce9149ac96cfba7e4b8f946a"} Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.452025 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xhb8k" event={"ID":"09a47d07-9bf6-4033-8c08-cc3aef9fe4f4","Type":"ContainerDied","Data":"4f0dfc138b95fd589daf732c95563b35b123c0075950db736b607653c673e3e1"} Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.452042 4874 scope.go:117] "RemoveContainer" containerID="c92c089eac1a4f555258a56bb484b8c71a6cc82dce9149ac96cfba7e4b8f946a" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.452216 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xhb8k" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.456476 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" event={"ID":"bb25a3d3-60b1-43ae-b007-19b20c362414","Type":"ContainerStarted","Data":"86211cce9bb8758fe37c94e0307689a34a40a3b0ea5d57981af9aeb1748168f6"} Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.458476 4874 generic.go:334] "Generic (PLEG): container finished" podID="213d34f5-75cd-459c-9e56-2938fe5e3950" containerID="a89cf0f40681e6ea2394a8efa5206b34afe35c8149f80b74967ca70f66ececcb" exitCode=0 Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.458516 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" event={"ID":"213d34f5-75cd-459c-9e56-2938fe5e3950","Type":"ContainerDied","Data":"a89cf0f40681e6ea2394a8efa5206b34afe35c8149f80b74967ca70f66ececcb"} Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.458544 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" event={"ID":"213d34f5-75cd-459c-9e56-2938fe5e3950","Type":"ContainerDied","Data":"e55fae314e5ccbaff31d4019382891789d84d2a8792b6ac6cab4bdcc20e97e1e"} Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.458626 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6dvvn" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.467845 4874 generic.go:334] "Generic (PLEG): container finished" podID="829e346d-eb89-4705-83c4-99d02fca8971" containerID="051c0e6931085c66795c18888f3c00c7f7391d31df635135b32e59153b20723e" exitCode=0 Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.467923 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk5qd" event={"ID":"829e346d-eb89-4705-83c4-99d02fca8971","Type":"ContainerDied","Data":"051c0e6931085c66795c18888f3c00c7f7391d31df635135b32e59153b20723e"} Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.467950 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk5qd" event={"ID":"829e346d-eb89-4705-83c4-99d02fca8971","Type":"ContainerDied","Data":"06e653a8a66624e58a13fce521592f82bf5da1e8eff383092cdaf8dbc041daca"} Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.467957 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bk5qd" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.478695 4874 generic.go:334] "Generic (PLEG): container finished" podID="dea4a6eb-c0b1-432a-81f2-e417250b0138" containerID="1f66114e745014e9b2811f8ce4a3519c0db97b1d4c5d32326d52ab45a752ab32" exitCode=0 Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.478766 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2mwt" event={"ID":"dea4a6eb-c0b1-432a-81f2-e417250b0138","Type":"ContainerDied","Data":"1f66114e745014e9b2811f8ce4a3519c0db97b1d4c5d32326d52ab45a752ab32"} Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.478793 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r2mwt" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.478806 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2mwt" event={"ID":"dea4a6eb-c0b1-432a-81f2-e417250b0138","Type":"ContainerDied","Data":"44f0f27f3f6817f2e87df4da350d3341a31946833627d7893617f4278baf12f8"} Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.491809 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d07947b-508d-4f12-ba1b-2d5f24a6db2c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.495332 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhb8k"] Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.502911 4874 scope.go:117] "RemoveContainer" containerID="0271a88b4f392b5c007a29012fdaee2d611a90d52f146f49b1b948112a16ea7f" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.507191 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xhb8k"] Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.511680 4874 generic.go:334] "Generic (PLEG): container finished" podID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" containerID="d86e34d52dc8fcbe07403b1086c0ea0e849c1283e037e58504051c2dba7b2efd" exitCode=0 Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.511746 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2gqx" event={"ID":"2d07947b-508d-4f12-ba1b-2d5f24a6db2c","Type":"ContainerDied","Data":"d86e34d52dc8fcbe07403b1086c0ea0e849c1283e037e58504051c2dba7b2efd"} Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.511794 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2gqx" 
event={"ID":"2d07947b-508d-4f12-ba1b-2d5f24a6db2c","Type":"ContainerDied","Data":"880843b24ca67a3147842f7c7ec48125fab7fce5de031516ba04e79aff10f06a"} Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.511919 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2gqx" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.564496 4874 scope.go:117] "RemoveContainer" containerID="af7c8535e94f5d42ca318e78dacd324021ec14636cac5b8d6617938e7cede85c" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.606130 4874 scope.go:117] "RemoveContainer" containerID="c92c089eac1a4f555258a56bb484b8c71a6cc82dce9149ac96cfba7e4b8f946a" Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 11:45:50.606708 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92c089eac1a4f555258a56bb484b8c71a6cc82dce9149ac96cfba7e4b8f946a\": container with ID starting with c92c089eac1a4f555258a56bb484b8c71a6cc82dce9149ac96cfba7e4b8f946a not found: ID does not exist" containerID="c92c089eac1a4f555258a56bb484b8c71a6cc82dce9149ac96cfba7e4b8f946a" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.606745 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92c089eac1a4f555258a56bb484b8c71a6cc82dce9149ac96cfba7e4b8f946a"} err="failed to get container status \"c92c089eac1a4f555258a56bb484b8c71a6cc82dce9149ac96cfba7e4b8f946a\": rpc error: code = NotFound desc = could not find container \"c92c089eac1a4f555258a56bb484b8c71a6cc82dce9149ac96cfba7e4b8f946a\": container with ID starting with c92c089eac1a4f555258a56bb484b8c71a6cc82dce9149ac96cfba7e4b8f946a not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.606772 4874 scope.go:117] "RemoveContainer" containerID="0271a88b4f392b5c007a29012fdaee2d611a90d52f146f49b1b948112a16ea7f" Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 
11:45:50.607225 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0271a88b4f392b5c007a29012fdaee2d611a90d52f146f49b1b948112a16ea7f\": container with ID starting with 0271a88b4f392b5c007a29012fdaee2d611a90d52f146f49b1b948112a16ea7f not found: ID does not exist" containerID="0271a88b4f392b5c007a29012fdaee2d611a90d52f146f49b1b948112a16ea7f" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.607266 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0271a88b4f392b5c007a29012fdaee2d611a90d52f146f49b1b948112a16ea7f"} err="failed to get container status \"0271a88b4f392b5c007a29012fdaee2d611a90d52f146f49b1b948112a16ea7f\": rpc error: code = NotFound desc = could not find container \"0271a88b4f392b5c007a29012fdaee2d611a90d52f146f49b1b948112a16ea7f\": container with ID starting with 0271a88b4f392b5c007a29012fdaee2d611a90d52f146f49b1b948112a16ea7f not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.607292 4874 scope.go:117] "RemoveContainer" containerID="af7c8535e94f5d42ca318e78dacd324021ec14636cac5b8d6617938e7cede85c" Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 11:45:50.607675 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af7c8535e94f5d42ca318e78dacd324021ec14636cac5b8d6617938e7cede85c\": container with ID starting with af7c8535e94f5d42ca318e78dacd324021ec14636cac5b8d6617938e7cede85c not found: ID does not exist" containerID="af7c8535e94f5d42ca318e78dacd324021ec14636cac5b8d6617938e7cede85c" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.607703 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af7c8535e94f5d42ca318e78dacd324021ec14636cac5b8d6617938e7cede85c"} err="failed to get container status \"af7c8535e94f5d42ca318e78dacd324021ec14636cac5b8d6617938e7cede85c\": rpc 
error: code = NotFound desc = could not find container \"af7c8535e94f5d42ca318e78dacd324021ec14636cac5b8d6617938e7cede85c\": container with ID starting with af7c8535e94f5d42ca318e78dacd324021ec14636cac5b8d6617938e7cede85c not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.607720 4874 scope.go:117] "RemoveContainer" containerID="a89cf0f40681e6ea2394a8efa5206b34afe35c8149f80b74967ca70f66ececcb" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.628661 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bk5qd"] Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.629627 4874 scope.go:117] "RemoveContainer" containerID="a3ab1ecac652d1663a1d25c20a4f94b60f4dcf52866d48a08ff55d35fc806fe3" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.634541 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bk5qd"] Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.646825 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6dvvn"] Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.648987 4874 scope.go:117] "RemoveContainer" containerID="a89cf0f40681e6ea2394a8efa5206b34afe35c8149f80b74967ca70f66ececcb" Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 11:45:50.649586 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a89cf0f40681e6ea2394a8efa5206b34afe35c8149f80b74967ca70f66ececcb\": container with ID starting with a89cf0f40681e6ea2394a8efa5206b34afe35c8149f80b74967ca70f66ececcb not found: ID does not exist" containerID="a89cf0f40681e6ea2394a8efa5206b34afe35c8149f80b74967ca70f66ececcb" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.649787 4874 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a89cf0f40681e6ea2394a8efa5206b34afe35c8149f80b74967ca70f66ececcb"} err="failed to get container status \"a89cf0f40681e6ea2394a8efa5206b34afe35c8149f80b74967ca70f66ececcb\": rpc error: code = NotFound desc = could not find container \"a89cf0f40681e6ea2394a8efa5206b34afe35c8149f80b74967ca70f66ececcb\": container with ID starting with a89cf0f40681e6ea2394a8efa5206b34afe35c8149f80b74967ca70f66ececcb not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.649878 4874 scope.go:117] "RemoveContainer" containerID="a3ab1ecac652d1663a1d25c20a4f94b60f4dcf52866d48a08ff55d35fc806fe3" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.650005 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6dvvn"] Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 11:45:50.650212 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3ab1ecac652d1663a1d25c20a4f94b60f4dcf52866d48a08ff55d35fc806fe3\": container with ID starting with a3ab1ecac652d1663a1d25c20a4f94b60f4dcf52866d48a08ff55d35fc806fe3 not found: ID does not exist" containerID="a3ab1ecac652d1663a1d25c20a4f94b60f4dcf52866d48a08ff55d35fc806fe3" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.650257 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3ab1ecac652d1663a1d25c20a4f94b60f4dcf52866d48a08ff55d35fc806fe3"} err="failed to get container status \"a3ab1ecac652d1663a1d25c20a4f94b60f4dcf52866d48a08ff55d35fc806fe3\": rpc error: code = NotFound desc = could not find container \"a3ab1ecac652d1663a1d25c20a4f94b60f4dcf52866d48a08ff55d35fc806fe3\": container with ID starting with a3ab1ecac652d1663a1d25c20a4f94b60f4dcf52866d48a08ff55d35fc806fe3 not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.650277 4874 scope.go:117] "RemoveContainer" 
containerID="051c0e6931085c66795c18888f3c00c7f7391d31df635135b32e59153b20723e" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.660480 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r2mwt"] Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.664004 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r2mwt"] Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.667053 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2gqx"] Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.669645 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s2gqx"] Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.670769 4874 scope.go:117] "RemoveContainer" containerID="231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.685699 4874 scope.go:117] "RemoveContainer" containerID="e04166d257b1ab6ec62d2fb4997b1ee33b374447294ba53534e3f997572dbcbc" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.698422 4874 scope.go:117] "RemoveContainer" containerID="051c0e6931085c66795c18888f3c00c7f7391d31df635135b32e59153b20723e" Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 11:45:50.700259 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"051c0e6931085c66795c18888f3c00c7f7391d31df635135b32e59153b20723e\": container with ID starting with 051c0e6931085c66795c18888f3c00c7f7391d31df635135b32e59153b20723e not found: ID does not exist" containerID="051c0e6931085c66795c18888f3c00c7f7391d31df635135b32e59153b20723e" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.700317 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051c0e6931085c66795c18888f3c00c7f7391d31df635135b32e59153b20723e"} 
err="failed to get container status \"051c0e6931085c66795c18888f3c00c7f7391d31df635135b32e59153b20723e\": rpc error: code = NotFound desc = could not find container \"051c0e6931085c66795c18888f3c00c7f7391d31df635135b32e59153b20723e\": container with ID starting with 051c0e6931085c66795c18888f3c00c7f7391d31df635135b32e59153b20723e not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.700362 4874 scope.go:117] "RemoveContainer" containerID="231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b" Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 11:45:50.700981 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b\": container with ID starting with 231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b not found: ID does not exist" containerID="231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.701014 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b"} err="failed to get container status \"231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b\": rpc error: code = NotFound desc = could not find container \"231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b\": container with ID starting with 231aba9fc47370e4a6cd041c18aeb432ac088da84612e9ae338664bb8b47b53b not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.701040 4874 scope.go:117] "RemoveContainer" containerID="e04166d257b1ab6ec62d2fb4997b1ee33b374447294ba53534e3f997572dbcbc" Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 11:45:50.701235 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e04166d257b1ab6ec62d2fb4997b1ee33b374447294ba53534e3f997572dbcbc\": container with ID starting with e04166d257b1ab6ec62d2fb4997b1ee33b374447294ba53534e3f997572dbcbc not found: ID does not exist" containerID="e04166d257b1ab6ec62d2fb4997b1ee33b374447294ba53534e3f997572dbcbc" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.701286 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e04166d257b1ab6ec62d2fb4997b1ee33b374447294ba53534e3f997572dbcbc"} err="failed to get container status \"e04166d257b1ab6ec62d2fb4997b1ee33b374447294ba53534e3f997572dbcbc\": rpc error: code = NotFound desc = could not find container \"e04166d257b1ab6ec62d2fb4997b1ee33b374447294ba53534e3f997572dbcbc\": container with ID starting with e04166d257b1ab6ec62d2fb4997b1ee33b374447294ba53534e3f997572dbcbc not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.701303 4874 scope.go:117] "RemoveContainer" containerID="1f66114e745014e9b2811f8ce4a3519c0db97b1d4c5d32326d52ab45a752ab32" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.713140 4874 scope.go:117] "RemoveContainer" containerID="3afe6d67a395d98d3557fc317e71f0c8a6bb5a855419905a6dab04529d3efc0a" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.723961 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" path="/var/lib/kubelet/pods/09a47d07-9bf6-4033-8c08-cc3aef9fe4f4/volumes" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.724825 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="213d34f5-75cd-459c-9e56-2938fe5e3950" path="/var/lib/kubelet/pods/213d34f5-75cd-459c-9e56-2938fe5e3950/volumes" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.725385 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" path="/var/lib/kubelet/pods/2d07947b-508d-4f12-ba1b-2d5f24a6db2c/volumes" Jan 22 11:45:50 crc 
kubenswrapper[4874]: I0122 11:45:50.726668 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829e346d-eb89-4705-83c4-99d02fca8971" path="/var/lib/kubelet/pods/829e346d-eb89-4705-83c4-99d02fca8971/volumes" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.726907 4874 scope.go:117] "RemoveContainer" containerID="f102fff60f0bad6a90aabfcc1da36f807499c6f5b663e355666a3b9418698f61" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.727388 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea4a6eb-c0b1-432a-81f2-e417250b0138" path="/var/lib/kubelet/pods/dea4a6eb-c0b1-432a-81f2-e417250b0138/volumes" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.737369 4874 scope.go:117] "RemoveContainer" containerID="1f66114e745014e9b2811f8ce4a3519c0db97b1d4c5d32326d52ab45a752ab32" Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 11:45:50.740867 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f66114e745014e9b2811f8ce4a3519c0db97b1d4c5d32326d52ab45a752ab32\": container with ID starting with 1f66114e745014e9b2811f8ce4a3519c0db97b1d4c5d32326d52ab45a752ab32 not found: ID does not exist" containerID="1f66114e745014e9b2811f8ce4a3519c0db97b1d4c5d32326d52ab45a752ab32" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.740895 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f66114e745014e9b2811f8ce4a3519c0db97b1d4c5d32326d52ab45a752ab32"} err="failed to get container status \"1f66114e745014e9b2811f8ce4a3519c0db97b1d4c5d32326d52ab45a752ab32\": rpc error: code = NotFound desc = could not find container \"1f66114e745014e9b2811f8ce4a3519c0db97b1d4c5d32326d52ab45a752ab32\": container with ID starting with 1f66114e745014e9b2811f8ce4a3519c0db97b1d4c5d32326d52ab45a752ab32 not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.740915 4874 scope.go:117] "RemoveContainer" 
containerID="3afe6d67a395d98d3557fc317e71f0c8a6bb5a855419905a6dab04529d3efc0a" Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 11:45:50.741159 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3afe6d67a395d98d3557fc317e71f0c8a6bb5a855419905a6dab04529d3efc0a\": container with ID starting with 3afe6d67a395d98d3557fc317e71f0c8a6bb5a855419905a6dab04529d3efc0a not found: ID does not exist" containerID="3afe6d67a395d98d3557fc317e71f0c8a6bb5a855419905a6dab04529d3efc0a" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.741178 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3afe6d67a395d98d3557fc317e71f0c8a6bb5a855419905a6dab04529d3efc0a"} err="failed to get container status \"3afe6d67a395d98d3557fc317e71f0c8a6bb5a855419905a6dab04529d3efc0a\": rpc error: code = NotFound desc = could not find container \"3afe6d67a395d98d3557fc317e71f0c8a6bb5a855419905a6dab04529d3efc0a\": container with ID starting with 3afe6d67a395d98d3557fc317e71f0c8a6bb5a855419905a6dab04529d3efc0a not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.741191 4874 scope.go:117] "RemoveContainer" containerID="f102fff60f0bad6a90aabfcc1da36f807499c6f5b663e355666a3b9418698f61" Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 11:45:50.741440 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f102fff60f0bad6a90aabfcc1da36f807499c6f5b663e355666a3b9418698f61\": container with ID starting with f102fff60f0bad6a90aabfcc1da36f807499c6f5b663e355666a3b9418698f61 not found: ID does not exist" containerID="f102fff60f0bad6a90aabfcc1da36f807499c6f5b663e355666a3b9418698f61" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.741461 4874 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f102fff60f0bad6a90aabfcc1da36f807499c6f5b663e355666a3b9418698f61"} err="failed to get container status \"f102fff60f0bad6a90aabfcc1da36f807499c6f5b663e355666a3b9418698f61\": rpc error: code = NotFound desc = could not find container \"f102fff60f0bad6a90aabfcc1da36f807499c6f5b663e355666a3b9418698f61\": container with ID starting with f102fff60f0bad6a90aabfcc1da36f807499c6f5b663e355666a3b9418698f61 not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.741473 4874 scope.go:117] "RemoveContainer" containerID="d86e34d52dc8fcbe07403b1086c0ea0e849c1283e037e58504051c2dba7b2efd" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.762862 4874 scope.go:117] "RemoveContainer" containerID="b877e60b7da09c280a2886dd4dcd5b096a9cfc5cccc418b32f7e1f7a67f9a9ec" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.777544 4874 scope.go:117] "RemoveContainer" containerID="b45c0a0edb45e712987d45512f5f8181affde43a34f168d3078df992db0dd1ef" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.791220 4874 scope.go:117] "RemoveContainer" containerID="d86e34d52dc8fcbe07403b1086c0ea0e849c1283e037e58504051c2dba7b2efd" Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 11:45:50.791640 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d86e34d52dc8fcbe07403b1086c0ea0e849c1283e037e58504051c2dba7b2efd\": container with ID starting with d86e34d52dc8fcbe07403b1086c0ea0e849c1283e037e58504051c2dba7b2efd not found: ID does not exist" containerID="d86e34d52dc8fcbe07403b1086c0ea0e849c1283e037e58504051c2dba7b2efd" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.791681 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86e34d52dc8fcbe07403b1086c0ea0e849c1283e037e58504051c2dba7b2efd"} err="failed to get container status \"d86e34d52dc8fcbe07403b1086c0ea0e849c1283e037e58504051c2dba7b2efd\": rpc error: code = 
NotFound desc = could not find container \"d86e34d52dc8fcbe07403b1086c0ea0e849c1283e037e58504051c2dba7b2efd\": container with ID starting with d86e34d52dc8fcbe07403b1086c0ea0e849c1283e037e58504051c2dba7b2efd not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.791710 4874 scope.go:117] "RemoveContainer" containerID="b877e60b7da09c280a2886dd4dcd5b096a9cfc5cccc418b32f7e1f7a67f9a9ec" Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 11:45:50.792095 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b877e60b7da09c280a2886dd4dcd5b096a9cfc5cccc418b32f7e1f7a67f9a9ec\": container with ID starting with b877e60b7da09c280a2886dd4dcd5b096a9cfc5cccc418b32f7e1f7a67f9a9ec not found: ID does not exist" containerID="b877e60b7da09c280a2886dd4dcd5b096a9cfc5cccc418b32f7e1f7a67f9a9ec" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.792125 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b877e60b7da09c280a2886dd4dcd5b096a9cfc5cccc418b32f7e1f7a67f9a9ec"} err="failed to get container status \"b877e60b7da09c280a2886dd4dcd5b096a9cfc5cccc418b32f7e1f7a67f9a9ec\": rpc error: code = NotFound desc = could not find container \"b877e60b7da09c280a2886dd4dcd5b096a9cfc5cccc418b32f7e1f7a67f9a9ec\": container with ID starting with b877e60b7da09c280a2886dd4dcd5b096a9cfc5cccc418b32f7e1f7a67f9a9ec not found: ID does not exist" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.792149 4874 scope.go:117] "RemoveContainer" containerID="b45c0a0edb45e712987d45512f5f8181affde43a34f168d3078df992db0dd1ef" Jan 22 11:45:50 crc kubenswrapper[4874]: E0122 11:45:50.792442 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b45c0a0edb45e712987d45512f5f8181affde43a34f168d3078df992db0dd1ef\": container with ID starting with 
b45c0a0edb45e712987d45512f5f8181affde43a34f168d3078df992db0dd1ef not found: ID does not exist" containerID="b45c0a0edb45e712987d45512f5f8181affde43a34f168d3078df992db0dd1ef" Jan 22 11:45:50 crc kubenswrapper[4874]: I0122 11:45:50.792470 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b45c0a0edb45e712987d45512f5f8181affde43a34f168d3078df992db0dd1ef"} err="failed to get container status \"b45c0a0edb45e712987d45512f5f8181affde43a34f168d3078df992db0dd1ef\": rpc error: code = NotFound desc = could not find container \"b45c0a0edb45e712987d45512f5f8181affde43a34f168d3078df992db0dd1ef\": container with ID starting with b45c0a0edb45e712987d45512f5f8181affde43a34f168d3078df992db0dd1ef not found: ID does not exist" Jan 22 11:45:51 crc kubenswrapper[4874]: I0122 11:45:51.519011 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" event={"ID":"bb25a3d3-60b1-43ae-b007-19b20c362414","Type":"ContainerStarted","Data":"d71bf6ae06cdecffa61111320d32d5070c4cc6a90d9ba025a3aa011af643ed86"} Jan 22 11:45:51 crc kubenswrapper[4874]: I0122 11:45:51.519273 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" Jan 22 11:45:51 crc kubenswrapper[4874]: I0122 11:45:51.523176 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" Jan 22 11:45:51 crc kubenswrapper[4874]: I0122 11:45:51.528182 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" event={"ID":"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c","Type":"ContainerStarted","Data":"d1f06fcfda9d856f6781243495ca08fc9b4566fd7006eab8adb7d4c9f8023bde"} Jan 22 11:45:51 crc kubenswrapper[4874]: I0122 11:45:51.528218 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" event={"ID":"c6b1b602-57a0-4459-bb14-e96f6fa1fe9c","Type":"ContainerStarted","Data":"ed2fc786963a7134e138b006ccc00a8ca69ad6f8e26c223c0b234675e09772cf"} Jan 22 11:45:51 crc kubenswrapper[4874]: I0122 11:45:51.528280 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:45:51 crc kubenswrapper[4874]: I0122 11:45:51.539209 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-st8lg" podStartSLOduration=2.539185818 podStartE2EDuration="2.539185818s" podCreationTimestamp="2026-01-22 11:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:45:51.536170863 +0000 UTC m=+325.381241933" watchObservedRunningTime="2026-01-22 11:45:51.539185818 +0000 UTC m=+325.384256888" Jan 22 11:45:51 crc kubenswrapper[4874]: I0122 11:45:51.572071 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" podStartSLOduration=2.57205121 podStartE2EDuration="2.57205121s" podCreationTimestamp="2026-01-22 11:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:45:51.568876321 +0000 UTC m=+325.413947391" watchObservedRunningTime="2026-01-22 11:45:51.57205121 +0000 UTC m=+325.417122290" Jan 22 11:46:03 crc kubenswrapper[4874]: I0122 11:46:03.372872 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2"] Jan 22 11:46:03 crc kubenswrapper[4874]: I0122 11:46:03.373546 4874 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" podUID="8ba4583a-60a2-4ace-ba80-8d804556d604" containerName="route-controller-manager" containerID="cri-o://21043c10d05ec41a76bd77364b386c886df31d414e1e8c2604a1659796f8c27e" gracePeriod=30 Jan 22 11:46:03 crc kubenswrapper[4874]: I0122 11:46:03.590191 4874 generic.go:334] "Generic (PLEG): container finished" podID="8ba4583a-60a2-4ace-ba80-8d804556d604" containerID="21043c10d05ec41a76bd77364b386c886df31d414e1e8c2604a1659796f8c27e" exitCode=0 Jan 22 11:46:03 crc kubenswrapper[4874]: I0122 11:46:03.590233 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" event={"ID":"8ba4583a-60a2-4ace-ba80-8d804556d604","Type":"ContainerDied","Data":"21043c10d05ec41a76bd77364b386c886df31d414e1e8c2604a1659796f8c27e"} Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.308735 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.470849 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba4583a-60a2-4ace-ba80-8d804556d604-config\") pod \"8ba4583a-60a2-4ace-ba80-8d804556d604\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.471444 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ba4583a-60a2-4ace-ba80-8d804556d604-client-ca\") pod \"8ba4583a-60a2-4ace-ba80-8d804556d604\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.471577 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xklr5\" (UniqueName: 
\"kubernetes.io/projected/8ba4583a-60a2-4ace-ba80-8d804556d604-kube-api-access-xklr5\") pod \"8ba4583a-60a2-4ace-ba80-8d804556d604\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.471689 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ba4583a-60a2-4ace-ba80-8d804556d604-serving-cert\") pod \"8ba4583a-60a2-4ace-ba80-8d804556d604\" (UID: \"8ba4583a-60a2-4ace-ba80-8d804556d604\") " Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.471761 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba4583a-60a2-4ace-ba80-8d804556d604-config" (OuterVolumeSpecName: "config") pod "8ba4583a-60a2-4ace-ba80-8d804556d604" (UID: "8ba4583a-60a2-4ace-ba80-8d804556d604"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.471923 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba4583a-60a2-4ace-ba80-8d804556d604-client-ca" (OuterVolumeSpecName: "client-ca") pod "8ba4583a-60a2-4ace-ba80-8d804556d604" (UID: "8ba4583a-60a2-4ace-ba80-8d804556d604"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.472160 4874 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba4583a-60a2-4ace-ba80-8d804556d604-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.472184 4874 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ba4583a-60a2-4ace-ba80-8d804556d604-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.476466 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ba4583a-60a2-4ace-ba80-8d804556d604-kube-api-access-xklr5" (OuterVolumeSpecName: "kube-api-access-xklr5") pod "8ba4583a-60a2-4ace-ba80-8d804556d604" (UID: "8ba4583a-60a2-4ace-ba80-8d804556d604"). InnerVolumeSpecName "kube-api-access-xklr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.478466 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ba4583a-60a2-4ace-ba80-8d804556d604-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8ba4583a-60a2-4ace-ba80-8d804556d604" (UID: "8ba4583a-60a2-4ace-ba80-8d804556d604"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.535988 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5s7gc"] Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536187 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829e346d-eb89-4705-83c4-99d02fca8971" containerName="extract-utilities" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536200 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="829e346d-eb89-4705-83c4-99d02fca8971" containerName="extract-utilities" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536210 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" containerName="extract-content" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536216 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" containerName="extract-content" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536225 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829e346d-eb89-4705-83c4-99d02fca8971" containerName="registry-server" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536232 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="829e346d-eb89-4705-83c4-99d02fca8971" containerName="registry-server" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536239 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" containerName="registry-server" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536245 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" containerName="registry-server" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536255 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" 
containerName="extract-content" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536260 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" containerName="extract-content" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536270 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea4a6eb-c0b1-432a-81f2-e417250b0138" containerName="extract-utilities" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536275 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea4a6eb-c0b1-432a-81f2-e417250b0138" containerName="extract-utilities" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536282 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829e346d-eb89-4705-83c4-99d02fca8971" containerName="extract-content" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536289 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="829e346d-eb89-4705-83c4-99d02fca8971" containerName="extract-content" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536296 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" containerName="extract-utilities" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536301 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" containerName="extract-utilities" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536308 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ba4583a-60a2-4ace-ba80-8d804556d604" containerName="route-controller-manager" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536314 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ba4583a-60a2-4ace-ba80-8d804556d604" containerName="route-controller-manager" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536323 4874 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="213d34f5-75cd-459c-9e56-2938fe5e3950" containerName="marketplace-operator" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536328 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="213d34f5-75cd-459c-9e56-2938fe5e3950" containerName="marketplace-operator" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536335 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea4a6eb-c0b1-432a-81f2-e417250b0138" containerName="extract-content" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536342 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea4a6eb-c0b1-432a-81f2-e417250b0138" containerName="extract-content" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536353 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea4a6eb-c0b1-432a-81f2-e417250b0138" containerName="registry-server" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536359 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea4a6eb-c0b1-432a-81f2-e417250b0138" containerName="registry-server" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536366 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" containerName="registry-server" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536373 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" containerName="registry-server" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536381 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" containerName="extract-utilities" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536387 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" containerName="extract-utilities" Jan 22 11:46:04 crc kubenswrapper[4874]: E0122 11:46:04.536423 4874 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="213d34f5-75cd-459c-9e56-2938fe5e3950" containerName="marketplace-operator" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536430 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="213d34f5-75cd-459c-9e56-2938fe5e3950" containerName="marketplace-operator" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536525 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="213d34f5-75cd-459c-9e56-2938fe5e3950" containerName="marketplace-operator" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536536 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="213d34f5-75cd-459c-9e56-2938fe5e3950" containerName="marketplace-operator" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536546 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="829e346d-eb89-4705-83c4-99d02fca8971" containerName="registry-server" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536553 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea4a6eb-c0b1-432a-81f2-e417250b0138" containerName="registry-server" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536560 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d07947b-508d-4f12-ba1b-2d5f24a6db2c" containerName="registry-server" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536567 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ba4583a-60a2-4ace-ba80-8d804556d604" containerName="route-controller-manager" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.536574 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="09a47d07-9bf6-4033-8c08-cc3aef9fe4f4" containerName="registry-server" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.537198 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.540467 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.546260 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5s7gc"] Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.573641 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xklr5\" (UniqueName: \"kubernetes.io/projected/8ba4583a-60a2-4ace-ba80-8d804556d604-kube-api-access-xklr5\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.573680 4874 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ba4583a-60a2-4ace-ba80-8d804556d604-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.596155 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" event={"ID":"8ba4583a-60a2-4ace-ba80-8d804556d604","Type":"ContainerDied","Data":"ef570dea7a83f0ce123b85b79bfbc070103a700770f9817e620b420f82ef7f8a"} Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.596198 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.596207 4874 scope.go:117] "RemoveContainer" containerID="21043c10d05ec41a76bd77364b386c886df31d414e1e8c2604a1659796f8c27e" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.622327 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2"] Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.625901 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5764494d47-vsfq2"] Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.674830 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37153c5d-6533-4973-99bd-f682b3c148d9-catalog-content\") pod \"redhat-marketplace-5s7gc\" (UID: \"37153c5d-6533-4973-99bd-f682b3c148d9\") " pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.674900 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37153c5d-6533-4973-99bd-f682b3c148d9-utilities\") pod \"redhat-marketplace-5s7gc\" (UID: \"37153c5d-6533-4973-99bd-f682b3c148d9\") " pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.674928 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slv5g\" (UniqueName: \"kubernetes.io/projected/37153c5d-6533-4973-99bd-f682b3c148d9-kube-api-access-slv5g\") pod \"redhat-marketplace-5s7gc\" (UID: \"37153c5d-6533-4973-99bd-f682b3c148d9\") " pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.725965 
4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ba4583a-60a2-4ace-ba80-8d804556d604" path="/var/lib/kubelet/pods/8ba4583a-60a2-4ace-ba80-8d804556d604/volumes" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.755219 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n"] Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.755981 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.757734 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.758002 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.759202 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.759454 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.759616 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.759643 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.764277 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n"] Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 
11:46:04.775945 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37153c5d-6533-4973-99bd-f682b3c148d9-catalog-content\") pod \"redhat-marketplace-5s7gc\" (UID: \"37153c5d-6533-4973-99bd-f682b3c148d9\") " pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.776020 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37153c5d-6533-4973-99bd-f682b3c148d9-utilities\") pod \"redhat-marketplace-5s7gc\" (UID: \"37153c5d-6533-4973-99bd-f682b3c148d9\") " pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.776047 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slv5g\" (UniqueName: \"kubernetes.io/projected/37153c5d-6533-4973-99bd-f682b3c148d9-kube-api-access-slv5g\") pod \"redhat-marketplace-5s7gc\" (UID: \"37153c5d-6533-4973-99bd-f682b3c148d9\") " pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.776644 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37153c5d-6533-4973-99bd-f682b3c148d9-catalog-content\") pod \"redhat-marketplace-5s7gc\" (UID: \"37153c5d-6533-4973-99bd-f682b3c148d9\") " pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.776815 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37153c5d-6533-4973-99bd-f682b3c148d9-utilities\") pod \"redhat-marketplace-5s7gc\" (UID: \"37153c5d-6533-4973-99bd-f682b3c148d9\") " pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.791436 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-slv5g\" (UniqueName: \"kubernetes.io/projected/37153c5d-6533-4973-99bd-f682b3c148d9-kube-api-access-slv5g\") pod \"redhat-marketplace-5s7gc\" (UID: \"37153c5d-6533-4973-99bd-f682b3c148d9\") " pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.861048 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.877971 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e-config\") pod \"route-controller-manager-5b8588574d-d6k8n\" (UID: \"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e\") " pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.878354 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28jn8\" (UniqueName: \"kubernetes.io/projected/59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e-kube-api-access-28jn8\") pod \"route-controller-manager-5b8588574d-d6k8n\" (UID: \"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e\") " pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.878752 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e-client-ca\") pod \"route-controller-manager-5b8588574d-d6k8n\" (UID: \"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e\") " pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.878785 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e-serving-cert\") pod \"route-controller-manager-5b8588574d-d6k8n\" (UID: \"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e\") " pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.979696 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e-config\") pod \"route-controller-manager-5b8588574d-d6k8n\" (UID: \"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e\") " pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.980065 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28jn8\" (UniqueName: \"kubernetes.io/projected/59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e-kube-api-access-28jn8\") pod \"route-controller-manager-5b8588574d-d6k8n\" (UID: \"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e\") " pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.980140 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e-client-ca\") pod \"route-controller-manager-5b8588574d-d6k8n\" (UID: \"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e\") " pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.980159 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e-serving-cert\") pod \"route-controller-manager-5b8588574d-d6k8n\" (UID: 
\"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e\") " pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.982410 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e-client-ca\") pod \"route-controller-manager-5b8588574d-d6k8n\" (UID: \"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e\") " pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.986587 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e-serving-cert\") pod \"route-controller-manager-5b8588574d-d6k8n\" (UID: \"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e\") " pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.990761 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e-config\") pod \"route-controller-manager-5b8588574d-d6k8n\" (UID: \"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e\") " pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:04 crc kubenswrapper[4874]: I0122 11:46:04.997317 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28jn8\" (UniqueName: \"kubernetes.io/projected/59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e-kube-api-access-28jn8\") pod \"route-controller-manager-5b8588574d-d6k8n\" (UID: \"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e\") " pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.072730 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.095989 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5s7gc"] Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.142715 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7qhz5"] Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.143843 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.152068 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.153863 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7qhz5"] Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.283243 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0684b3da-a3b2-496b-be2e-b8e0fe8e1277-catalog-content\") pod \"redhat-operators-7qhz5\" (UID: \"0684b3da-a3b2-496b-be2e-b8e0fe8e1277\") " pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.283771 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2plf4\" (UniqueName: \"kubernetes.io/projected/0684b3da-a3b2-496b-be2e-b8e0fe8e1277-kube-api-access-2plf4\") pod \"redhat-operators-7qhz5\" (UID: \"0684b3da-a3b2-496b-be2e-b8e0fe8e1277\") " pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.283811 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0684b3da-a3b2-496b-be2e-b8e0fe8e1277-utilities\") pod \"redhat-operators-7qhz5\" (UID: \"0684b3da-a3b2-496b-be2e-b8e0fe8e1277\") " pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.384857 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2plf4\" (UniqueName: \"kubernetes.io/projected/0684b3da-a3b2-496b-be2e-b8e0fe8e1277-kube-api-access-2plf4\") pod \"redhat-operators-7qhz5\" (UID: \"0684b3da-a3b2-496b-be2e-b8e0fe8e1277\") " pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.384905 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0684b3da-a3b2-496b-be2e-b8e0fe8e1277-utilities\") pod \"redhat-operators-7qhz5\" (UID: \"0684b3da-a3b2-496b-be2e-b8e0fe8e1277\") " pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.384964 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0684b3da-a3b2-496b-be2e-b8e0fe8e1277-catalog-content\") pod \"redhat-operators-7qhz5\" (UID: \"0684b3da-a3b2-496b-be2e-b8e0fe8e1277\") " pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.385601 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0684b3da-a3b2-496b-be2e-b8e0fe8e1277-utilities\") pod \"redhat-operators-7qhz5\" (UID: \"0684b3da-a3b2-496b-be2e-b8e0fe8e1277\") " pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.385606 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0684b3da-a3b2-496b-be2e-b8e0fe8e1277-catalog-content\") pod \"redhat-operators-7qhz5\" (UID: \"0684b3da-a3b2-496b-be2e-b8e0fe8e1277\") " pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.403948 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2plf4\" (UniqueName: \"kubernetes.io/projected/0684b3da-a3b2-496b-be2e-b8e0fe8e1277-kube-api-access-2plf4\") pod \"redhat-operators-7qhz5\" (UID: \"0684b3da-a3b2-496b-be2e-b8e0fe8e1277\") " pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.470503 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n"] Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.479917 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.608335 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" event={"ID":"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e","Type":"ContainerStarted","Data":"bd1fd63555b8a029a92e0f40c158d063833d3e6e437ab5d0bf5c04496ca7930c"} Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.609993 4874 generic.go:334] "Generic (PLEG): container finished" podID="37153c5d-6533-4973-99bd-f682b3c148d9" containerID="684e002757e3dc3f4607d5fe73a314e71a31507700f72d618222be0e988a644f" exitCode=0 Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.610054 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5s7gc" event={"ID":"37153c5d-6533-4973-99bd-f682b3c148d9","Type":"ContainerDied","Data":"684e002757e3dc3f4607d5fe73a314e71a31507700f72d618222be0e988a644f"} Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.610076 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5s7gc" event={"ID":"37153c5d-6533-4973-99bd-f682b3c148d9","Type":"ContainerStarted","Data":"3610f727eca8b59c0ef9ba7a4755e8243ae3a7e6694ae4e499109a32fd98ae07"} Jan 22 11:46:05 crc kubenswrapper[4874]: I0122 11:46:05.895778 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7qhz5"] Jan 22 11:46:06 crc kubenswrapper[4874]: I0122 11:46:06.623284 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5s7gc" event={"ID":"37153c5d-6533-4973-99bd-f682b3c148d9","Type":"ContainerStarted","Data":"c9c46af6c28993718e9a9e43b2e96b33eb0ad605dab2e2425b80f011941ca8fb"} Jan 22 11:46:06 crc kubenswrapper[4874]: I0122 11:46:06.625032 4874 generic.go:334] "Generic (PLEG): container finished" podID="0684b3da-a3b2-496b-be2e-b8e0fe8e1277" containerID="3e075910332bf00ab03e82065b1f43ae1fdcf6117854e80acd8b69026db05443" exitCode=0 Jan 22 11:46:06 crc kubenswrapper[4874]: I0122 11:46:06.625136 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qhz5" event={"ID":"0684b3da-a3b2-496b-be2e-b8e0fe8e1277","Type":"ContainerDied","Data":"3e075910332bf00ab03e82065b1f43ae1fdcf6117854e80acd8b69026db05443"} Jan 22 11:46:06 crc kubenswrapper[4874]: I0122 11:46:06.625185 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qhz5" event={"ID":"0684b3da-a3b2-496b-be2e-b8e0fe8e1277","Type":"ContainerStarted","Data":"2a7a120c40ba3e5721b378f27bf0a65a238c9b96f9dc2ab6b7f3892cad117222"} Jan 22 11:46:06 crc kubenswrapper[4874]: I0122 11:46:06.626369 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" event={"ID":"59b05a6e-2fa5-4bfa-b7c2-47f43f7d681e","Type":"ContainerStarted","Data":"c49a07ce825a7e06fa4caecc1f69fac8b2307a3ebcf1bbe7b914ac17f5c6dee2"} Jan 
22 11:46:06 crc kubenswrapper[4874]: I0122 11:46:06.626587 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:06 crc kubenswrapper[4874]: I0122 11:46:06.633274 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" Jan 22 11:46:06 crc kubenswrapper[4874]: I0122 11:46:06.688326 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b8588574d-d6k8n" podStartSLOduration=3.688300619 podStartE2EDuration="3.688300619s" podCreationTimestamp="2026-01-22 11:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:46:06.681789663 +0000 UTC m=+340.526860733" watchObservedRunningTime="2026-01-22 11:46:06.688300619 +0000 UTC m=+340.533371689" Jan 22 11:46:06 crc kubenswrapper[4874]: I0122 11:46:06.934271 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g952h"] Jan 22 11:46:06 crc kubenswrapper[4874]: I0122 11:46:06.936048 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:06 crc kubenswrapper[4874]: I0122 11:46:06.937797 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 11:46:06 crc kubenswrapper[4874]: I0122 11:46:06.945816 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g952h"] Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.107679 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m2vz\" (UniqueName: \"kubernetes.io/projected/1b32604b-2210-43c4-9b45-a833b8bdff64-kube-api-access-4m2vz\") pod \"community-operators-g952h\" (UID: \"1b32604b-2210-43c4-9b45-a833b8bdff64\") " pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.107759 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b32604b-2210-43c4-9b45-a833b8bdff64-utilities\") pod \"community-operators-g952h\" (UID: \"1b32604b-2210-43c4-9b45-a833b8bdff64\") " pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.107848 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b32604b-2210-43c4-9b45-a833b8bdff64-catalog-content\") pod \"community-operators-g952h\" (UID: \"1b32604b-2210-43c4-9b45-a833b8bdff64\") " pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.209152 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b32604b-2210-43c4-9b45-a833b8bdff64-catalog-content\") pod \"community-operators-g952h\" (UID: 
\"1b32604b-2210-43c4-9b45-a833b8bdff64\") " pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.209297 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m2vz\" (UniqueName: \"kubernetes.io/projected/1b32604b-2210-43c4-9b45-a833b8bdff64-kube-api-access-4m2vz\") pod \"community-operators-g952h\" (UID: \"1b32604b-2210-43c4-9b45-a833b8bdff64\") " pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.209447 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b32604b-2210-43c4-9b45-a833b8bdff64-utilities\") pod \"community-operators-g952h\" (UID: \"1b32604b-2210-43c4-9b45-a833b8bdff64\") " pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.210900 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b32604b-2210-43c4-9b45-a833b8bdff64-utilities\") pod \"community-operators-g952h\" (UID: \"1b32604b-2210-43c4-9b45-a833b8bdff64\") " pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.210964 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b32604b-2210-43c4-9b45-a833b8bdff64-catalog-content\") pod \"community-operators-g952h\" (UID: \"1b32604b-2210-43c4-9b45-a833b8bdff64\") " pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.226913 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m2vz\" (UniqueName: \"kubernetes.io/projected/1b32604b-2210-43c4-9b45-a833b8bdff64-kube-api-access-4m2vz\") pod \"community-operators-g952h\" (UID: 
\"1b32604b-2210-43c4-9b45-a833b8bdff64\") " pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.260461 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.537376 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bbdq6"] Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.538597 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.540893 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.556025 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbdq6"] Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.614443 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlqxz\" (UniqueName: \"kubernetes.io/projected/b0a069d7-0600-4490-82a9-2656913f35b7-kube-api-access-jlqxz\") pod \"certified-operators-bbdq6\" (UID: \"b0a069d7-0600-4490-82a9-2656913f35b7\") " pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.617485 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a069d7-0600-4490-82a9-2656913f35b7-utilities\") pod \"certified-operators-bbdq6\" (UID: \"b0a069d7-0600-4490-82a9-2656913f35b7\") " pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.617875 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a069d7-0600-4490-82a9-2656913f35b7-catalog-content\") pod \"certified-operators-bbdq6\" (UID: \"b0a069d7-0600-4490-82a9-2656913f35b7\") " pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.635805 4874 generic.go:334] "Generic (PLEG): container finished" podID="37153c5d-6533-4973-99bd-f682b3c148d9" containerID="c9c46af6c28993718e9a9e43b2e96b33eb0ad605dab2e2425b80f011941ca8fb" exitCode=0 Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.636032 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5s7gc" event={"ID":"37153c5d-6533-4973-99bd-f682b3c148d9","Type":"ContainerDied","Data":"c9c46af6c28993718e9a9e43b2e96b33eb0ad605dab2e2425b80f011941ca8fb"} Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.666345 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g952h"] Jan 22 11:46:07 crc kubenswrapper[4874]: W0122 11:46:07.680162 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b32604b_2210_43c4_9b45_a833b8bdff64.slice/crio-9b914680fd3cd46302939b75768100737185d6b054310552b00b181c4e42a281 WatchSource:0}: Error finding container 9b914680fd3cd46302939b75768100737185d6b054310552b00b181c4e42a281: Status 404 returned error can't find the container with id 9b914680fd3cd46302939b75768100737185d6b054310552b00b181c4e42a281 Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.719054 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a069d7-0600-4490-82a9-2656913f35b7-utilities\") pod \"certified-operators-bbdq6\" (UID: \"b0a069d7-0600-4490-82a9-2656913f35b7\") " pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 
11:46:07.719163 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a069d7-0600-4490-82a9-2656913f35b7-catalog-content\") pod \"certified-operators-bbdq6\" (UID: \"b0a069d7-0600-4490-82a9-2656913f35b7\") " pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.719200 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlqxz\" (UniqueName: \"kubernetes.io/projected/b0a069d7-0600-4490-82a9-2656913f35b7-kube-api-access-jlqxz\") pod \"certified-operators-bbdq6\" (UID: \"b0a069d7-0600-4490-82a9-2656913f35b7\") " pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.719595 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a069d7-0600-4490-82a9-2656913f35b7-utilities\") pod \"certified-operators-bbdq6\" (UID: \"b0a069d7-0600-4490-82a9-2656913f35b7\") " pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.719845 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a069d7-0600-4490-82a9-2656913f35b7-catalog-content\") pod \"certified-operators-bbdq6\" (UID: \"b0a069d7-0600-4490-82a9-2656913f35b7\") " pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.737452 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlqxz\" (UniqueName: \"kubernetes.io/projected/b0a069d7-0600-4490-82a9-2656913f35b7-kube-api-access-jlqxz\") pod \"certified-operators-bbdq6\" (UID: \"b0a069d7-0600-4490-82a9-2656913f35b7\") " pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:07 crc kubenswrapper[4874]: I0122 11:46:07.891573 4874 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:08 crc kubenswrapper[4874]: I0122 11:46:08.273186 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbdq6"] Jan 22 11:46:08 crc kubenswrapper[4874]: W0122 11:46:08.279874 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0a069d7_0600_4490_82a9_2656913f35b7.slice/crio-77b7f983a0caf174b2a1ff9585d546c476923723f895d8d2ccf6e1a80d72b6c1 WatchSource:0}: Error finding container 77b7f983a0caf174b2a1ff9585d546c476923723f895d8d2ccf6e1a80d72b6c1: Status 404 returned error can't find the container with id 77b7f983a0caf174b2a1ff9585d546c476923723f895d8d2ccf6e1a80d72b6c1 Jan 22 11:46:08 crc kubenswrapper[4874]: I0122 11:46:08.643574 4874 generic.go:334] "Generic (PLEG): container finished" podID="1b32604b-2210-43c4-9b45-a833b8bdff64" containerID="5cdf975ced7eb78999c4d49ef3319fe4454449901745d48f99503240c68cab9a" exitCode=0 Jan 22 11:46:08 crc kubenswrapper[4874]: I0122 11:46:08.643639 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g952h" event={"ID":"1b32604b-2210-43c4-9b45-a833b8bdff64","Type":"ContainerDied","Data":"5cdf975ced7eb78999c4d49ef3319fe4454449901745d48f99503240c68cab9a"} Jan 22 11:46:08 crc kubenswrapper[4874]: I0122 11:46:08.643878 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g952h" event={"ID":"1b32604b-2210-43c4-9b45-a833b8bdff64","Type":"ContainerStarted","Data":"9b914680fd3cd46302939b75768100737185d6b054310552b00b181c4e42a281"} Jan 22 11:46:08 crc kubenswrapper[4874]: I0122 11:46:08.646027 4874 generic.go:334] "Generic (PLEG): container finished" podID="b0a069d7-0600-4490-82a9-2656913f35b7" containerID="2136577bcdcbdcffa2ef9ace83cb7011df13fa4d06b362c2ac9b0ca6f8650b12" exitCode=0 Jan 
22 11:46:08 crc kubenswrapper[4874]: I0122 11:46:08.646088 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbdq6" event={"ID":"b0a069d7-0600-4490-82a9-2656913f35b7","Type":"ContainerDied","Data":"2136577bcdcbdcffa2ef9ace83cb7011df13fa4d06b362c2ac9b0ca6f8650b12"} Jan 22 11:46:08 crc kubenswrapper[4874]: I0122 11:46:08.646106 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbdq6" event={"ID":"b0a069d7-0600-4490-82a9-2656913f35b7","Type":"ContainerStarted","Data":"77b7f983a0caf174b2a1ff9585d546c476923723f895d8d2ccf6e1a80d72b6c1"} Jan 22 11:46:08 crc kubenswrapper[4874]: I0122 11:46:08.649579 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5s7gc" event={"ID":"37153c5d-6533-4973-99bd-f682b3c148d9","Type":"ContainerStarted","Data":"54db0b1ebec1433a5bd88c09690a6a8851c51a92a0a90d5576ee312fb1dc0d4a"} Jan 22 11:46:08 crc kubenswrapper[4874]: I0122 11:46:08.653165 4874 generic.go:334] "Generic (PLEG): container finished" podID="0684b3da-a3b2-496b-be2e-b8e0fe8e1277" containerID="b9b2382b73113a44e92fe83413023a7ead8e974ad1dea0f23af30036a056af60" exitCode=0 Jan 22 11:46:08 crc kubenswrapper[4874]: I0122 11:46:08.653234 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qhz5" event={"ID":"0684b3da-a3b2-496b-be2e-b8e0fe8e1277","Type":"ContainerDied","Data":"b9b2382b73113a44e92fe83413023a7ead8e974ad1dea0f23af30036a056af60"} Jan 22 11:46:08 crc kubenswrapper[4874]: I0122 11:46:08.718168 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5s7gc" podStartSLOduration=2.214117203 podStartE2EDuration="4.718154635s" podCreationTimestamp="2026-01-22 11:46:04 +0000 UTC" firstStartedPulling="2026-01-22 11:46:05.611839725 +0000 UTC m=+339.456910795" lastFinishedPulling="2026-01-22 11:46:08.115877157 +0000 UTC 
m=+341.960948227" observedRunningTime="2026-01-22 11:46:08.716291386 +0000 UTC m=+342.561362496" watchObservedRunningTime="2026-01-22 11:46:08.718154635 +0000 UTC m=+342.563225705" Jan 22 11:46:09 crc kubenswrapper[4874]: I0122 11:46:09.659707 4874 generic.go:334] "Generic (PLEG): container finished" podID="b0a069d7-0600-4490-82a9-2656913f35b7" containerID="1ca6b3f9986d9290ed89e908e86e7d2e92aa24a906749df6d2ac85d44a6ba2a1" exitCode=0 Jan 22 11:46:09 crc kubenswrapper[4874]: I0122 11:46:09.659777 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbdq6" event={"ID":"b0a069d7-0600-4490-82a9-2656913f35b7","Type":"ContainerDied","Data":"1ca6b3f9986d9290ed89e908e86e7d2e92aa24a906749df6d2ac85d44a6ba2a1"} Jan 22 11:46:09 crc kubenswrapper[4874]: I0122 11:46:09.671421 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qhz5" event={"ID":"0684b3da-a3b2-496b-be2e-b8e0fe8e1277","Type":"ContainerStarted","Data":"57f11e14580c8438fa69024b2c6e29e0e1d4e35056bf6b184f9ec8980bae6bb9"} Jan 22 11:46:09 crc kubenswrapper[4874]: I0122 11:46:09.674080 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g952h" event={"ID":"1b32604b-2210-43c4-9b45-a833b8bdff64","Type":"ContainerStarted","Data":"5b17401120283f26aec712f87e671ae70042045fd1b29eda8aa624834e6d83ad"} Jan 22 11:46:09 crc kubenswrapper[4874]: I0122 11:46:09.733917 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7qhz5" podStartSLOduration=2.311162537 podStartE2EDuration="4.733899578s" podCreationTimestamp="2026-01-22 11:46:05 +0000 UTC" firstStartedPulling="2026-01-22 11:46:06.625999599 +0000 UTC m=+340.471070669" lastFinishedPulling="2026-01-22 11:46:09.04873664 +0000 UTC m=+342.893807710" observedRunningTime="2026-01-22 11:46:09.728702324 +0000 UTC m=+343.573773414" watchObservedRunningTime="2026-01-22 
11:46:09.733899578 +0000 UTC m=+343.578970668" Jan 22 11:46:09 crc kubenswrapper[4874]: I0122 11:46:09.832476 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-n2lnd" Jan 22 11:46:09 crc kubenswrapper[4874]: I0122 11:46:09.940457 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jg4wj"] Jan 22 11:46:10 crc kubenswrapper[4874]: I0122 11:46:10.679354 4874 generic.go:334] "Generic (PLEG): container finished" podID="1b32604b-2210-43c4-9b45-a833b8bdff64" containerID="5b17401120283f26aec712f87e671ae70042045fd1b29eda8aa624834e6d83ad" exitCode=0 Jan 22 11:46:10 crc kubenswrapper[4874]: I0122 11:46:10.679453 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g952h" event={"ID":"1b32604b-2210-43c4-9b45-a833b8bdff64","Type":"ContainerDied","Data":"5b17401120283f26aec712f87e671ae70042045fd1b29eda8aa624834e6d83ad"} Jan 22 11:46:10 crc kubenswrapper[4874]: I0122 11:46:10.682293 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbdq6" event={"ID":"b0a069d7-0600-4490-82a9-2656913f35b7","Type":"ContainerStarted","Data":"56ec71b649678b9d06d64da2f108ecc5d7baa2ab0550b12655288c4bbfbcdef7"} Jan 22 11:46:10 crc kubenswrapper[4874]: I0122 11:46:10.716735 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bbdq6" podStartSLOduration=2.244238633 podStartE2EDuration="3.716716741s" podCreationTimestamp="2026-01-22 11:46:07 +0000 UTC" firstStartedPulling="2026-01-22 11:46:08.64748192 +0000 UTC m=+342.492552990" lastFinishedPulling="2026-01-22 11:46:10.119960028 +0000 UTC m=+343.965031098" observedRunningTime="2026-01-22 11:46:10.714410068 +0000 UTC m=+344.559481148" watchObservedRunningTime="2026-01-22 11:46:10.716716741 +0000 UTC m=+344.561787811" Jan 22 11:46:11 crc 
kubenswrapper[4874]: I0122 11:46:11.691528 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g952h" event={"ID":"1b32604b-2210-43c4-9b45-a833b8bdff64","Type":"ContainerStarted","Data":"fc909630936b38a104b45954668e2ed68928a36f5e77acab9c59012caae6c2e0"} Jan 22 11:46:13 crc kubenswrapper[4874]: I0122 11:46:13.520949 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:46:13 crc kubenswrapper[4874]: I0122 11:46:13.521011 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:46:14 crc kubenswrapper[4874]: I0122 11:46:14.861387 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:14 crc kubenswrapper[4874]: I0122 11:46:14.861753 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:14 crc kubenswrapper[4874]: I0122 11:46:14.912180 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:14 crc kubenswrapper[4874]: I0122 11:46:14.928347 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g952h" podStartSLOduration=6.395079881 podStartE2EDuration="8.928325416s" podCreationTimestamp="2026-01-22 11:46:06 +0000 UTC" firstStartedPulling="2026-01-22 11:46:08.645137936 +0000 UTC 
m=+342.490209006" lastFinishedPulling="2026-01-22 11:46:11.178383471 +0000 UTC m=+345.023454541" observedRunningTime="2026-01-22 11:46:11.709976313 +0000 UTC m=+345.555047383" watchObservedRunningTime="2026-01-22 11:46:14.928325416 +0000 UTC m=+348.773396486" Jan 22 11:46:15 crc kubenswrapper[4874]: I0122 11:46:15.481095 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:15 crc kubenswrapper[4874]: I0122 11:46:15.481495 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:15 crc kubenswrapper[4874]: I0122 11:46:15.516481 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:15 crc kubenswrapper[4874]: I0122 11:46:15.749054 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:46:15 crc kubenswrapper[4874]: I0122 11:46:15.758638 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7qhz5" Jan 22 11:46:17 crc kubenswrapper[4874]: I0122 11:46:17.261096 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:17 crc kubenswrapper[4874]: I0122 11:46:17.261179 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:17 crc kubenswrapper[4874]: I0122 11:46:17.332391 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:17 crc kubenswrapper[4874]: I0122 11:46:17.781105 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g952h" Jan 22 11:46:17 crc 
kubenswrapper[4874]: I0122 11:46:17.892116 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:17 crc kubenswrapper[4874]: I0122 11:46:17.892179 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:17 crc kubenswrapper[4874]: I0122 11:46:17.941246 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:18 crc kubenswrapper[4874]: I0122 11:46:18.804489 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:46:34 crc kubenswrapper[4874]: I0122 11:46:34.980012 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" podUID="90a037cd-cacf-4706-a857-f65c8f16c384" containerName="registry" containerID="cri-o://e38ad0098b8ea8cfa9bacebe690cc8d3c60a195c4c2554e6c860332c9db56abe" gracePeriod=30 Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.420557 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.506265 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90a037cd-cacf-4706-a857-f65c8f16c384-ca-trust-extracted\") pod \"90a037cd-cacf-4706-a857-f65c8f16c384\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.506418 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90a037cd-cacf-4706-a857-f65c8f16c384-installation-pull-secrets\") pod \"90a037cd-cacf-4706-a857-f65c8f16c384\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.506464 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-registry-tls\") pod \"90a037cd-cacf-4706-a857-f65c8f16c384\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.506504 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-bound-sa-token\") pod \"90a037cd-cacf-4706-a857-f65c8f16c384\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.506526 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90a037cd-cacf-4706-a857-f65c8f16c384-registry-certificates\") pod \"90a037cd-cacf-4706-a857-f65c8f16c384\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.506541 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zwtlg\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-kube-api-access-zwtlg\") pod \"90a037cd-cacf-4706-a857-f65c8f16c384\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.506717 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"90a037cd-cacf-4706-a857-f65c8f16c384\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.506745 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90a037cd-cacf-4706-a857-f65c8f16c384-trusted-ca\") pod \"90a037cd-cacf-4706-a857-f65c8f16c384\" (UID: \"90a037cd-cacf-4706-a857-f65c8f16c384\") " Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.507609 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90a037cd-cacf-4706-a857-f65c8f16c384-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "90a037cd-cacf-4706-a857-f65c8f16c384" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.508604 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90a037cd-cacf-4706-a857-f65c8f16c384-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "90a037cd-cacf-4706-a857-f65c8f16c384" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.513697 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "90a037cd-cacf-4706-a857-f65c8f16c384" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.514086 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-kube-api-access-zwtlg" (OuterVolumeSpecName: "kube-api-access-zwtlg") pod "90a037cd-cacf-4706-a857-f65c8f16c384" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384"). InnerVolumeSpecName "kube-api-access-zwtlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.514703 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a037cd-cacf-4706-a857-f65c8f16c384-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "90a037cd-cacf-4706-a857-f65c8f16c384" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.515377 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "90a037cd-cacf-4706-a857-f65c8f16c384" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.526300 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "90a037cd-cacf-4706-a857-f65c8f16c384" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.535469 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90a037cd-cacf-4706-a857-f65c8f16c384-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "90a037cd-cacf-4706-a857-f65c8f16c384" (UID: "90a037cd-cacf-4706-a857-f65c8f16c384"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.608670 4874 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.608704 4874 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90a037cd-cacf-4706-a857-f65c8f16c384-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.608715 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwtlg\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-kube-api-access-zwtlg\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.608724 4874 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/90a037cd-cacf-4706-a857-f65c8f16c384-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.608733 4874 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90a037cd-cacf-4706-a857-f65c8f16c384-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.608743 4874 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90a037cd-cacf-4706-a857-f65c8f16c384-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.608752 4874 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90a037cd-cacf-4706-a857-f65c8f16c384-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.828302 4874 generic.go:334] "Generic (PLEG): container finished" podID="90a037cd-cacf-4706-a857-f65c8f16c384" containerID="e38ad0098b8ea8cfa9bacebe690cc8d3c60a195c4c2554e6c860332c9db56abe" exitCode=0 Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.828381 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.828378 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" event={"ID":"90a037cd-cacf-4706-a857-f65c8f16c384","Type":"ContainerDied","Data":"e38ad0098b8ea8cfa9bacebe690cc8d3c60a195c4c2554e6c860332c9db56abe"} Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.828475 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jg4wj" event={"ID":"90a037cd-cacf-4706-a857-f65c8f16c384","Type":"ContainerDied","Data":"61abb1ca8d94b8a93a2470a92da41ddacd13e2ac8b6f3b50b461fb0f48f44b61"} Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.828495 4874 scope.go:117] "RemoveContainer" containerID="e38ad0098b8ea8cfa9bacebe690cc8d3c60a195c4c2554e6c860332c9db56abe" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.852822 4874 scope.go:117] "RemoveContainer" containerID="e38ad0098b8ea8cfa9bacebe690cc8d3c60a195c4c2554e6c860332c9db56abe" Jan 22 11:46:35 crc kubenswrapper[4874]: E0122 11:46:35.853340 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e38ad0098b8ea8cfa9bacebe690cc8d3c60a195c4c2554e6c860332c9db56abe\": container with ID starting with e38ad0098b8ea8cfa9bacebe690cc8d3c60a195c4c2554e6c860332c9db56abe not found: ID does not exist" containerID="e38ad0098b8ea8cfa9bacebe690cc8d3c60a195c4c2554e6c860332c9db56abe" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.853370 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e38ad0098b8ea8cfa9bacebe690cc8d3c60a195c4c2554e6c860332c9db56abe"} err="failed to get container status \"e38ad0098b8ea8cfa9bacebe690cc8d3c60a195c4c2554e6c860332c9db56abe\": rpc error: code = NotFound desc = could not find container 
\"e38ad0098b8ea8cfa9bacebe690cc8d3c60a195c4c2554e6c860332c9db56abe\": container with ID starting with e38ad0098b8ea8cfa9bacebe690cc8d3c60a195c4c2554e6c860332c9db56abe not found: ID does not exist" Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.890493 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jg4wj"] Jan 22 11:46:35 crc kubenswrapper[4874]: I0122 11:46:35.898148 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jg4wj"] Jan 22 11:46:36 crc kubenswrapper[4874]: I0122 11:46:36.722931 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a037cd-cacf-4706-a857-f65c8f16c384" path="/var/lib/kubelet/pods/90a037cd-cacf-4706-a857-f65c8f16c384/volumes" Jan 22 11:46:43 crc kubenswrapper[4874]: I0122 11:46:43.520095 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:46:43 crc kubenswrapper[4874]: I0122 11:46:43.520849 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:47:13 crc kubenswrapper[4874]: I0122 11:47:13.520332 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:47:13 crc kubenswrapper[4874]: I0122 11:47:13.522802 4874 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:47:13 crc kubenswrapper[4874]: I0122 11:47:13.523038 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:47:13 crc kubenswrapper[4874]: I0122 11:47:13.524066 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf675adc04e058930041f77cfb016f23f15475800ec1dca3cd6db1579e71257b"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:47:13 crc kubenswrapper[4874]: I0122 11:47:13.524523 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://cf675adc04e058930041f77cfb016f23f15475800ec1dca3cd6db1579e71257b" gracePeriod=600 Jan 22 11:47:14 crc kubenswrapper[4874]: I0122 11:47:14.087648 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="cf675adc04e058930041f77cfb016f23f15475800ec1dca3cd6db1579e71257b" exitCode=0 Jan 22 11:47:14 crc kubenswrapper[4874]: I0122 11:47:14.088009 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"cf675adc04e058930041f77cfb016f23f15475800ec1dca3cd6db1579e71257b"} Jan 22 11:47:14 crc kubenswrapper[4874]: I0122 11:47:14.088040 
4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"4ef8bf2a0eb4af528f4de7cda955aa92b0de433bdf6d0ebda36103e1834fe2b2"} Jan 22 11:47:14 crc kubenswrapper[4874]: I0122 11:47:14.088059 4874 scope.go:117] "RemoveContainer" containerID="e71ba793b9f7a76acd2521c1621a1ae38890f23c9caaa3d033e416369701b192" Jan 22 11:49:13 crc kubenswrapper[4874]: I0122 11:49:13.520259 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:49:13 crc kubenswrapper[4874]: I0122 11:49:13.520747 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:49:43 crc kubenswrapper[4874]: I0122 11:49:43.520507 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:49:43 crc kubenswrapper[4874]: I0122 11:49:43.520992 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:50:13 crc kubenswrapper[4874]: I0122 
11:50:13.520779 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:50:13 crc kubenswrapper[4874]: I0122 11:50:13.522607 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:50:13 crc kubenswrapper[4874]: I0122 11:50:13.522693 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:50:13 crc kubenswrapper[4874]: I0122 11:50:13.523345 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ef8bf2a0eb4af528f4de7cda955aa92b0de433bdf6d0ebda36103e1834fe2b2"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:50:13 crc kubenswrapper[4874]: I0122 11:50:13.523437 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://4ef8bf2a0eb4af528f4de7cda955aa92b0de433bdf6d0ebda36103e1834fe2b2" gracePeriod=600 Jan 22 11:50:14 crc kubenswrapper[4874]: I0122 11:50:14.282086 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="4ef8bf2a0eb4af528f4de7cda955aa92b0de433bdf6d0ebda36103e1834fe2b2" exitCode=0 Jan 22 
11:50:14 crc kubenswrapper[4874]: I0122 11:50:14.282141 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"4ef8bf2a0eb4af528f4de7cda955aa92b0de433bdf6d0ebda36103e1834fe2b2"} Jan 22 11:50:14 crc kubenswrapper[4874]: I0122 11:50:14.282691 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"58a69e8f9170bdd4dd90e6e773cd03089d2a6279398d2b1a2ba4ed87135be13a"} Jan 22 11:50:14 crc kubenswrapper[4874]: I0122 11:50:14.282719 4874 scope.go:117] "RemoveContainer" containerID="cf675adc04e058930041f77cfb016f23f15475800ec1dca3cd6db1579e71257b" Jan 22 11:51:05 crc kubenswrapper[4874]: I0122 11:51:05.774178 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6tmll"] Jan 22 11:51:05 crc kubenswrapper[4874]: I0122 11:51:05.775083 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovn-controller" containerID="cri-o://3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794" gracePeriod=30 Jan 22 11:51:05 crc kubenswrapper[4874]: I0122 11:51:05.775499 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="sbdb" containerID="cri-o://c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de" gracePeriod=30 Jan 22 11:51:05 crc kubenswrapper[4874]: I0122 11:51:05.775539 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="nbdb" 
containerID="cri-o://eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079" gracePeriod=30 Jan 22 11:51:05 crc kubenswrapper[4874]: I0122 11:51:05.775572 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="northd" containerID="cri-o://6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6" gracePeriod=30 Jan 22 11:51:05 crc kubenswrapper[4874]: I0122 11:51:05.775598 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2" gracePeriod=30 Jan 22 11:51:05 crc kubenswrapper[4874]: I0122 11:51:05.775624 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="kube-rbac-proxy-node" containerID="cri-o://32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348" gracePeriod=30 Jan 22 11:51:05 crc kubenswrapper[4874]: I0122 11:51:05.775654 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovn-acl-logging" containerID="cri-o://af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba" gracePeriod=30 Jan 22 11:51:05 crc kubenswrapper[4874]: I0122 11:51:05.825235 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" containerID="cri-o://a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161" gracePeriod=30 Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 
11:51:06.113356 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/3.log" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.116304 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovn-acl-logging/0.log" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.116895 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovn-controller/0.log" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.117320 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.185948 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lbtk2"] Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.186225 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovn-acl-logging" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186257 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovn-acl-logging" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.186277 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="kube-rbac-proxy-node" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186289 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="kube-rbac-proxy-node" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.186306 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" 
containerName="sbdb" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186316 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="sbdb" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.186330 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186340 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.186353 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186365 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.186381 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovn-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186392 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovn-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.186437 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186449 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.186463 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" 
containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186472 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.186485 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="kubecfg-setup" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186495 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="kubecfg-setup" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.186515 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="nbdb" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186525 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="nbdb" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.186542 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="northd" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186553 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="northd" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.186585 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a037cd-cacf-4706-a857-f65c8f16c384" containerName="registry" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186596 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a037cd-cacf-4706-a857-f65c8f16c384" containerName="registry" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186770 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovn-acl-logging" Jan 22 11:51:06 crc 
kubenswrapper[4874]: I0122 11:51:06.186784 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186796 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186806 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="nbdb" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186823 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="northd" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186839 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a037cd-cacf-4706-a857-f65c8f16c384" containerName="registry" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186854 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="sbdb" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186868 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186882 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186894 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovn-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.186906 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="kube-rbac-proxy-node" 
Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.187065 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.187081 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.187264 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.187449 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.187468 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.187605 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerName="ovnkube-controller" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.189689 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.277778 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9lv4\" (UniqueName: \"kubernetes.io/projected/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-kube-api-access-t9lv4\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.277849 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-run-netns\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.277922 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-ovn\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.277960 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-openvswitch\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.277995 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-cni-netd\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278035 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-cni-bin\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278066 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-systemd-units\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278100 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-var-lib-openvswitch\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278133 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-etc-openvswitch\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278126 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278155 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278177 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-env-overrides\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278232 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278266 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovnkube-config\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278248 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). 
InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278281 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278292 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278301 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovn-node-metrics-cert\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278274 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278350 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278508 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovnkube-script-lib\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278582 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-node-log\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278668 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-run-ovn-kubernetes\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278692 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-node-log" (OuterVolumeSpecName: "node-log") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278694 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278727 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278739 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278781 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-log-socket\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278791 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278789 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278861 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-log-socket" (OuterVolumeSpecName: "log-socket") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278825 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-systemd\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278924 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-kubelet\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.278965 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-slash\") pod \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\" (UID: \"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9\") " Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279010 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279119 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-slash" (OuterVolumeSpecName: "host-slash") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279242 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-run-openvswitch\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279313 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-run-systemd\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279313 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279352 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-cni-bin\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279384 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-node-log\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279476 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-slash\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279521 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5937262c-c601-4bdc-9e86-06900a448320-ovnkube-config\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279554 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5937262c-c601-4bdc-9e86-06900a448320-ovn-node-metrics-cert\") pod \"ovnkube-node-lbtk2\" (UID: 
\"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279586 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-run-ovn-kubernetes\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279636 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5937262c-c601-4bdc-9e86-06900a448320-env-overrides\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279671 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5937262c-c601-4bdc-9e86-06900a448320-ovnkube-script-lib\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279712 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-var-lib-openvswitch\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279749 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-run-ovn\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279823 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-cni-netd\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279857 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279899 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-kubelet\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.279935 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vl7\" (UniqueName: \"kubernetes.io/projected/5937262c-c601-4bdc-9e86-06900a448320-kube-api-access-n2vl7\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280050 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-log-socket\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280103 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-etc-openvswitch\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280193 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-systemd-units\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280251 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-run-netns\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280345 4874 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280369 4874 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280390 4874 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280438 4874 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280458 4874 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280476 4874 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280494 4874 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280512 4874 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280528 4874 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 
11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280545 4874 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280566 4874 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-node-log\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280583 4874 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280603 4874 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280620 4874 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-log-socket\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280638 4874 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280654 4874 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-slash\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.280671 4874 
reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.284911 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-kube-api-access-t9lv4" (OuterVolumeSpecName: "kube-api-access-t9lv4") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "kube-api-access-t9lv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.285822 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.294164 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" (UID: "642d0ca0-2e0f-4b69-9484-a63d0a01f8a9"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.381931 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-run-openvswitch\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382041 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-run-systemd\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382086 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-cni-bin\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382133 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-node-log\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382189 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-slash\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 
11:51:06.382240 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5937262c-c601-4bdc-9e86-06900a448320-ovnkube-config\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382283 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5937262c-c601-4bdc-9e86-06900a448320-ovn-node-metrics-cert\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382331 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-run-ovn-kubernetes\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382374 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5937262c-c601-4bdc-9e86-06900a448320-env-overrides\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382461 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5937262c-c601-4bdc-9e86-06900a448320-ovnkube-script-lib\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382514 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-var-lib-openvswitch\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382561 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-run-ovn\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382638 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-cni-netd\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382721 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382782 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-kubelet\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382824 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n2vl7\" (UniqueName: \"kubernetes.io/projected/5937262c-c601-4bdc-9e86-06900a448320-kube-api-access-n2vl7\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382865 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-log-socket\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382908 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-etc-openvswitch\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.382972 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-systemd-units\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383025 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-run-netns\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383135 4874 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383174 4874 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383202 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9lv4\" (UniqueName: \"kubernetes.io/projected/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9-kube-api-access-t9lv4\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383290 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-run-netns\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383373 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-run-openvswitch\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383472 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-run-systemd\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383530 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-cni-bin\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383589 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-run-ovn\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383682 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-var-lib-openvswitch\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383704 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-etc-openvswitch\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383754 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-log-socket\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383827 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-run-ovn-kubernetes\") pod \"ovnkube-node-lbtk2\" (UID: 
\"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383872 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-kubelet\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383827 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383949 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-slash\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383997 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-node-log\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.383874 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-host-cni-netd\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" 
Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.384200 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5937262c-c601-4bdc-9e86-06900a448320-systemd-units\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.385003 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5937262c-c601-4bdc-9e86-06900a448320-ovnkube-script-lib\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.385107 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5937262c-c601-4bdc-9e86-06900a448320-env-overrides\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.385592 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5937262c-c601-4bdc-9e86-06900a448320-ovnkube-config\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.390789 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5937262c-c601-4bdc-9e86-06900a448320-ovn-node-metrics-cert\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.413653 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n2vl7\" (UniqueName: \"kubernetes.io/projected/5937262c-c601-4bdc-9e86-06900a448320-kube-api-access-n2vl7\") pod \"ovnkube-node-lbtk2\" (UID: \"5937262c-c601-4bdc-9e86-06900a448320\") " pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.509490 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:06 crc kubenswrapper[4874]: W0122 11:51:06.544141 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5937262c_c601_4bdc_9e86_06900a448320.slice/crio-fbc8c59e1ba96d09e2275be4b4fef1829c84d16c3b415161dad6e72827b47264 WatchSource:0}: Error finding container fbc8c59e1ba96d09e2275be4b4fef1829c84d16c3b415161dad6e72827b47264: Status 404 returned error can't find the container with id fbc8c59e1ba96d09e2275be4b4fef1829c84d16c3b415161dad6e72827b47264 Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.656811 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" event={"ID":"5937262c-c601-4bdc-9e86-06900a448320","Type":"ContainerStarted","Data":"fbc8c59e1ba96d09e2275be4b4fef1829c84d16c3b415161dad6e72827b47264"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.662355 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovnkube-controller/3.log" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.669034 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovn-acl-logging/0.log" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.669760 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6tmll_642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/ovn-controller/0.log" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670356 4874 generic.go:334] "Generic (PLEG): container finished" podID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerID="a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161" exitCode=0 Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670436 4874 generic.go:334] "Generic (PLEG): container finished" podID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerID="c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de" exitCode=0 Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670457 4874 generic.go:334] "Generic (PLEG): container finished" podID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerID="eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079" exitCode=0 Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670472 4874 generic.go:334] "Generic (PLEG): container finished" podID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerID="6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6" exitCode=0 Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670487 4874 generic.go:334] "Generic (PLEG): container finished" podID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerID="284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2" exitCode=0 Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670470 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670554 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670580 4874 scope.go:117] "RemoveContainer" containerID="a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670503 4874 generic.go:334] "Generic (PLEG): container finished" podID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerID="32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348" exitCode=0 Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670653 4874 generic.go:334] "Generic (PLEG): container finished" podID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerID="af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba" exitCode=143 Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670669 4874 generic.go:334] "Generic (PLEG): container finished" podID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" containerID="3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794" exitCode=143 Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670559 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670742 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670770 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6"} Jan 22 11:51:06 crc 
kubenswrapper[4874]: I0122 11:51:06.670788 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670804 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670845 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670866 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670900 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670907 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670933 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670978 4874 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670985 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670992 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.670998 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671008 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671020 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671028 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671034 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671040 4874 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671046 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671052 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671060 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671066 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671155 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671165 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671175 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794"} Jan 22 
11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671187 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671195 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671201 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671207 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671214 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671506 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671524 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671531 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba"} Jan 22 
11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671537 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671544 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671555 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tmll" event={"ID":"642d0ca0-2e0f-4b69-9484-a63d0a01f8a9","Type":"ContainerDied","Data":"1a325212afdff76674e62ed80b3cf828c221bb977e244657ca644f4020d804af"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671571 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671580 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671587 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671593 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671600 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671607 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671613 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671619 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671626 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.671632 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.673027 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krrtc_977746b5-ac1b-4b6e-bdbc-ddd90225e68c/kube-multus/2.log" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.675806 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krrtc_977746b5-ac1b-4b6e-bdbc-ddd90225e68c/kube-multus/1.log" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.675877 4874 generic.go:334] "Generic (PLEG): container finished" podID="977746b5-ac1b-4b6e-bdbc-ddd90225e68c" 
containerID="55eeb9abd8c425711e374c107c22ec24d1741880327f226d7db5e06d67925630" exitCode=2 Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.675921 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krrtc" event={"ID":"977746b5-ac1b-4b6e-bdbc-ddd90225e68c","Type":"ContainerDied","Data":"55eeb9abd8c425711e374c107c22ec24d1741880327f226d7db5e06d67925630"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.676068 4874 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cecfbaa0efaf8c435c3409ccad9deaa4cc25167f0b978622d1ab9c949c4024c8"} Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.676487 4874 scope.go:117] "RemoveContainer" containerID="55eeb9abd8c425711e374c107c22ec24d1741880327f226d7db5e06d67925630" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.676726 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-krrtc_openshift-multus(977746b5-ac1b-4b6e-bdbc-ddd90225e68c)\"" pod="openshift-multus/multus-krrtc" podUID="977746b5-ac1b-4b6e-bdbc-ddd90225e68c" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.697296 4874 scope.go:117] "RemoveContainer" containerID="052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.778073 4874 scope.go:117] "RemoveContainer" containerID="c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.785469 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6tmll"] Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.792178 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6tmll"] Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.794002 4874 
scope.go:117] "RemoveContainer" containerID="eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.806172 4874 scope.go:117] "RemoveContainer" containerID="6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.821312 4874 scope.go:117] "RemoveContainer" containerID="284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.840933 4874 scope.go:117] "RemoveContainer" containerID="32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.854324 4874 scope.go:117] "RemoveContainer" containerID="af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.868230 4874 scope.go:117] "RemoveContainer" containerID="3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.882334 4874 scope.go:117] "RemoveContainer" containerID="f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.895665 4874 scope.go:117] "RemoveContainer" containerID="a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.896088 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161\": container with ID starting with a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161 not found: ID does not exist" containerID="a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.896135 4874 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161"} err="failed to get container status \"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161\": rpc error: code = NotFound desc = could not find container \"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161\": container with ID starting with a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.896168 4874 scope.go:117] "RemoveContainer" containerID="052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.896568 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\": container with ID starting with 052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35 not found: ID does not exist" containerID="052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.896596 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35"} err="failed to get container status \"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\": rpc error: code = NotFound desc = could not find container \"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\": container with ID starting with 052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.896621 4874 scope.go:117] "RemoveContainer" containerID="c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.897024 4874 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\": container with ID starting with c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de not found: ID does not exist" containerID="c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.897060 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de"} err="failed to get container status \"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\": rpc error: code = NotFound desc = could not find container \"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\": container with ID starting with c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.897074 4874 scope.go:117] "RemoveContainer" containerID="eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.897473 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\": container with ID starting with eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079 not found: ID does not exist" containerID="eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.897535 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079"} err="failed to get container status \"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\": rpc error: code = NotFound desc = could not find container 
\"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\": container with ID starting with eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.897571 4874 scope.go:117] "RemoveContainer" containerID="6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.898077 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\": container with ID starting with 6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6 not found: ID does not exist" containerID="6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.898122 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6"} err="failed to get container status \"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\": rpc error: code = NotFound desc = could not find container \"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\": container with ID starting with 6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.898141 4874 scope.go:117] "RemoveContainer" containerID="284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.898554 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\": container with ID starting with 284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2 not found: ID does not exist" 
containerID="284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.898603 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2"} err="failed to get container status \"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\": rpc error: code = NotFound desc = could not find container \"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\": container with ID starting with 284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.898633 4874 scope.go:117] "RemoveContainer" containerID="32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.898974 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\": container with ID starting with 32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348 not found: ID does not exist" containerID="32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.899013 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348"} err="failed to get container status \"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\": rpc error: code = NotFound desc = could not find container \"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\": container with ID starting with 32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.899030 4874 scope.go:117] 
"RemoveContainer" containerID="af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.899323 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\": container with ID starting with af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba not found: ID does not exist" containerID="af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.899354 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba"} err="failed to get container status \"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\": rpc error: code = NotFound desc = could not find container \"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\": container with ID starting with af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.899370 4874 scope.go:117] "RemoveContainer" containerID="3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.899655 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\": container with ID starting with 3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794 not found: ID does not exist" containerID="3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.899679 4874 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794"} err="failed to get container status \"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\": rpc error: code = NotFound desc = could not find container \"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\": container with ID starting with 3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.899694 4874 scope.go:117] "RemoveContainer" containerID="f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f" Jan 22 11:51:06 crc kubenswrapper[4874]: E0122 11:51:06.899931 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\": container with ID starting with f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f not found: ID does not exist" containerID="f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.899976 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f"} err="failed to get container status \"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\": rpc error: code = NotFound desc = could not find container \"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\": container with ID starting with f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.900001 4874 scope.go:117] "RemoveContainer" containerID="a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.900276 4874 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161"} err="failed to get container status \"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161\": rpc error: code = NotFound desc = could not find container \"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161\": container with ID starting with a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.900299 4874 scope.go:117] "RemoveContainer" containerID="052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.900535 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35"} err="failed to get container status \"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\": rpc error: code = NotFound desc = could not find container \"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\": container with ID starting with 052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.900577 4874 scope.go:117] "RemoveContainer" containerID="c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.901016 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de"} err="failed to get container status \"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\": rpc error: code = NotFound desc = could not find container \"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\": container with ID starting with c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de not 
found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.901034 4874 scope.go:117] "RemoveContainer" containerID="eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.901361 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079"} err="failed to get container status \"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\": rpc error: code = NotFound desc = could not find container \"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\": container with ID starting with eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.901378 4874 scope.go:117] "RemoveContainer" containerID="6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.901707 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6"} err="failed to get container status \"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\": rpc error: code = NotFound desc = could not find container \"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\": container with ID starting with 6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.901740 4874 scope.go:117] "RemoveContainer" containerID="284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.902330 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2"} err="failed to get 
container status \"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\": rpc error: code = NotFound desc = could not find container \"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\": container with ID starting with 284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.902443 4874 scope.go:117] "RemoveContainer" containerID="32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.902982 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348"} err="failed to get container status \"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\": rpc error: code = NotFound desc = could not find container \"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\": container with ID starting with 32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.903022 4874 scope.go:117] "RemoveContainer" containerID="af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.903275 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba"} err="failed to get container status \"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\": rpc error: code = NotFound desc = could not find container \"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\": container with ID starting with af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.903297 4874 scope.go:117] "RemoveContainer" 
containerID="3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.903557 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794"} err="failed to get container status \"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\": rpc error: code = NotFound desc = could not find container \"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\": container with ID starting with 3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.903588 4874 scope.go:117] "RemoveContainer" containerID="f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.903849 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f"} err="failed to get container status \"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\": rpc error: code = NotFound desc = could not find container \"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\": container with ID starting with f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.903867 4874 scope.go:117] "RemoveContainer" containerID="a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.904076 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161"} err="failed to get container status \"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161\": rpc error: code = NotFound desc = could 
not find container \"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161\": container with ID starting with a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.904109 4874 scope.go:117] "RemoveContainer" containerID="052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.904366 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35"} err="failed to get container status \"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\": rpc error: code = NotFound desc = could not find container \"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\": container with ID starting with 052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.904386 4874 scope.go:117] "RemoveContainer" containerID="c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.904645 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de"} err="failed to get container status \"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\": rpc error: code = NotFound desc = could not find container \"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\": container with ID starting with c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.904676 4874 scope.go:117] "RemoveContainer" containerID="eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 
11:51:06.904884 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079"} err="failed to get container status \"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\": rpc error: code = NotFound desc = could not find container \"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\": container with ID starting with eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.904901 4874 scope.go:117] "RemoveContainer" containerID="6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.905187 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6"} err="failed to get container status \"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\": rpc error: code = NotFound desc = could not find container \"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\": container with ID starting with 6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.905219 4874 scope.go:117] "RemoveContainer" containerID="284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.905681 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2"} err="failed to get container status \"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\": rpc error: code = NotFound desc = could not find container \"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\": container with ID starting with 
284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.905714 4874 scope.go:117] "RemoveContainer" containerID="32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.905963 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348"} err="failed to get container status \"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\": rpc error: code = NotFound desc = could not find container \"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\": container with ID starting with 32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.905982 4874 scope.go:117] "RemoveContainer" containerID="af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.906183 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba"} err="failed to get container status \"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\": rpc error: code = NotFound desc = could not find container \"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\": container with ID starting with af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.906212 4874 scope.go:117] "RemoveContainer" containerID="3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.906492 4874 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794"} err="failed to get container status \"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\": rpc error: code = NotFound desc = could not find container \"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\": container with ID starting with 3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.906525 4874 scope.go:117] "RemoveContainer" containerID="f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.906801 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f"} err="failed to get container status \"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\": rpc error: code = NotFound desc = could not find container \"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\": container with ID starting with f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.906824 4874 scope.go:117] "RemoveContainer" containerID="a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.907138 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161"} err="failed to get container status \"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161\": rpc error: code = NotFound desc = could not find container \"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161\": container with ID starting with a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161 not found: ID does not 
exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.907182 4874 scope.go:117] "RemoveContainer" containerID="052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.907504 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35"} err="failed to get container status \"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\": rpc error: code = NotFound desc = could not find container \"052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35\": container with ID starting with 052dc6d3925ff9785ba5a4c833cc761b29f083abcaeff3104c5c103c65659b35 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.907524 4874 scope.go:117] "RemoveContainer" containerID="c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.907838 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de"} err="failed to get container status \"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\": rpc error: code = NotFound desc = could not find container \"c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de\": container with ID starting with c8820ecf159f799b4b40af886c3788127aacb2e9f59ae4e5f53af1798da898de not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.907886 4874 scope.go:117] "RemoveContainer" containerID="eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.908337 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079"} err="failed to get container status 
\"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\": rpc error: code = NotFound desc = could not find container \"eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079\": container with ID starting with eabf4bd0bcaadf2006582497f2d0e6aaf0ff5b4d1be038a58e45e3484e629079 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.908355 4874 scope.go:117] "RemoveContainer" containerID="6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.908618 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6"} err="failed to get container status \"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\": rpc error: code = NotFound desc = could not find container \"6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6\": container with ID starting with 6b440f662ef1c69071309de21c1a20246fd2ae4d01c92086287391a6fe5848d6 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.908656 4874 scope.go:117] "RemoveContainer" containerID="284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.909015 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2"} err="failed to get container status \"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\": rpc error: code = NotFound desc = could not find container \"284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2\": container with ID starting with 284368ec533c5ce597410ad825babc3c53211edce42a7298b0b78d38c510b6f2 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.909050 4874 scope.go:117] "RemoveContainer" 
containerID="32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.909595 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348"} err="failed to get container status \"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\": rpc error: code = NotFound desc = could not find container \"32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348\": container with ID starting with 32a81acf1a16e21ef8fa656065235d38beb28ea4d4dc28f8519d1d9f10347348 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.909640 4874 scope.go:117] "RemoveContainer" containerID="af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.910179 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba"} err="failed to get container status \"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\": rpc error: code = NotFound desc = could not find container \"af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba\": container with ID starting with af3ca476362022cf4305d74c7f29819abc8a48d52222d171a3056f10e2cb67ba not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.910216 4874 scope.go:117] "RemoveContainer" containerID="3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.910577 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794"} err="failed to get container status \"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\": rpc error: code = NotFound desc = could 
not find container \"3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794\": container with ID starting with 3d0bdbbba76909afd7862ef9e9ef224eec57d72779dab9160af584b279e14794 not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.910607 4874 scope.go:117] "RemoveContainer" containerID="f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.910927 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f"} err="failed to get container status \"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\": rpc error: code = NotFound desc = could not find container \"f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f\": container with ID starting with f849b97108e8ca007253cf2f4a548c6362f9618e9110a0d4b19e99c0ade6de1f not found: ID does not exist" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.911006 4874 scope.go:117] "RemoveContainer" containerID="a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161" Jan 22 11:51:06 crc kubenswrapper[4874]: I0122 11:51:06.911334 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161"} err="failed to get container status \"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161\": rpc error: code = NotFound desc = could not find container \"a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161\": container with ID starting with a6b6cc9e9262eb8c84117af65760c4bb3d7625a0b7b8b0c3c3eefbcde0b59161 not found: ID does not exist" Jan 22 11:51:07 crc kubenswrapper[4874]: I0122 11:51:07.688237 4874 generic.go:334] "Generic (PLEG): container finished" podID="5937262c-c601-4bdc-9e86-06900a448320" 
containerID="3ac568eb1b73e9db0db4c813128a52165babb0168916650443a514d5161194ef" exitCode=0 Jan 22 11:51:07 crc kubenswrapper[4874]: I0122 11:51:07.688633 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" event={"ID":"5937262c-c601-4bdc-9e86-06900a448320","Type":"ContainerDied","Data":"3ac568eb1b73e9db0db4c813128a52165babb0168916650443a514d5161194ef"} Jan 22 11:51:08 crc kubenswrapper[4874]: I0122 11:51:08.701148 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" event={"ID":"5937262c-c601-4bdc-9e86-06900a448320","Type":"ContainerStarted","Data":"734d12487de38e83a56ed62534d303f239155ed627c43e63605ed1673cd0b059"} Jan 22 11:51:08 crc kubenswrapper[4874]: I0122 11:51:08.701680 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" event={"ID":"5937262c-c601-4bdc-9e86-06900a448320","Type":"ContainerStarted","Data":"cbb55d3d4711f04a0f6114d8083191131c8f713b613cab320ced8fe2bb210ff7"} Jan 22 11:51:08 crc kubenswrapper[4874]: I0122 11:51:08.701718 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" event={"ID":"5937262c-c601-4bdc-9e86-06900a448320","Type":"ContainerStarted","Data":"612190b71092305c4861a3b9c5639a0b0a1d425f2488415b6aecfae090a5535b"} Jan 22 11:51:08 crc kubenswrapper[4874]: I0122 11:51:08.701745 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" event={"ID":"5937262c-c601-4bdc-9e86-06900a448320","Type":"ContainerStarted","Data":"dbf962f2aeb6324d88ba5f19b6dd0a7037260c1fda9bba552aa5be254db0246d"} Jan 22 11:51:08 crc kubenswrapper[4874]: I0122 11:51:08.701770 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" 
event={"ID":"5937262c-c601-4bdc-9e86-06900a448320","Type":"ContainerStarted","Data":"84ef83580edfe2cf2b0d6af52e89ea6977f864611b4366deb801a90212c3a208"} Jan 22 11:51:08 crc kubenswrapper[4874]: I0122 11:51:08.701795 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" event={"ID":"5937262c-c601-4bdc-9e86-06900a448320","Type":"ContainerStarted","Data":"1f0e69d384556682f0a74cbe00ddd73f9942f812445322bb19395eba5cd1705a"} Jan 22 11:51:08 crc kubenswrapper[4874]: I0122 11:51:08.730602 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="642d0ca0-2e0f-4b69-9484-a63d0a01f8a9" path="/var/lib/kubelet/pods/642d0ca0-2e0f-4b69-9484-a63d0a01f8a9/volumes" Jan 22 11:51:11 crc kubenswrapper[4874]: I0122 11:51:11.727636 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" event={"ID":"5937262c-c601-4bdc-9e86-06900a448320","Type":"ContainerStarted","Data":"af109f291683f6f23b12f6d5917d0dfbde6c90c5596bf42275ab78b1f661182c"} Jan 22 11:51:13 crc kubenswrapper[4874]: I0122 11:51:13.748496 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" event={"ID":"5937262c-c601-4bdc-9e86-06900a448320","Type":"ContainerStarted","Data":"23d543b70510b32616c6db1feceffde9b7debab80e4b5004506d534f966e35c1"} Jan 22 11:51:13 crc kubenswrapper[4874]: I0122 11:51:13.749127 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:13 crc kubenswrapper[4874]: I0122 11:51:13.790445 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:13 crc kubenswrapper[4874]: I0122 11:51:13.796485 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" podStartSLOduration=7.796461033 podStartE2EDuration="7.796461033s" 
podCreationTimestamp="2026-01-22 11:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:51:13.79350645 +0000 UTC m=+647.638577530" watchObservedRunningTime="2026-01-22 11:51:13.796461033 +0000 UTC m=+647.641532123" Jan 22 11:51:14 crc kubenswrapper[4874]: I0122 11:51:14.754374 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:14 crc kubenswrapper[4874]: I0122 11:51:14.754918 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:14 crc kubenswrapper[4874]: I0122 11:51:14.797065 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:51:19 crc kubenswrapper[4874]: I0122 11:51:19.716551 4874 scope.go:117] "RemoveContainer" containerID="55eeb9abd8c425711e374c107c22ec24d1741880327f226d7db5e06d67925630" Jan 22 11:51:19 crc kubenswrapper[4874]: E0122 11:51:19.716853 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-krrtc_openshift-multus(977746b5-ac1b-4b6e-bdbc-ddd90225e68c)\"" pod="openshift-multus/multus-krrtc" podUID="977746b5-ac1b-4b6e-bdbc-ddd90225e68c" Jan 22 11:51:27 crc kubenswrapper[4874]: I0122 11:51:27.140788 4874 scope.go:117] "RemoveContainer" containerID="cecfbaa0efaf8c435c3409ccad9deaa4cc25167f0b978622d1ab9c949c4024c8" Jan 22 11:51:27 crc kubenswrapper[4874]: I0122 11:51:27.835786 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krrtc_977746b5-ac1b-4b6e-bdbc-ddd90225e68c/kube-multus/2.log" Jan 22 11:51:34 crc kubenswrapper[4874]: I0122 11:51:34.717072 4874 scope.go:117] "RemoveContainer" 
containerID="55eeb9abd8c425711e374c107c22ec24d1741880327f226d7db5e06d67925630" Jan 22 11:51:35 crc kubenswrapper[4874]: I0122 11:51:35.890021 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krrtc_977746b5-ac1b-4b6e-bdbc-ddd90225e68c/kube-multus/2.log" Jan 22 11:51:35 crc kubenswrapper[4874]: I0122 11:51:35.892227 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krrtc" event={"ID":"977746b5-ac1b-4b6e-bdbc-ddd90225e68c","Type":"ContainerStarted","Data":"c6cb636066ef9548b0f5770eab00a07ede7ca6a9963d9ed16cb71f112e67d961"} Jan 22 11:51:36 crc kubenswrapper[4874]: I0122 11:51:36.540102 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lbtk2" Jan 22 11:52:13 crc kubenswrapper[4874]: I0122 11:52:13.520765 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:52:13 crc kubenswrapper[4874]: I0122 11:52:13.521552 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:52:17 crc kubenswrapper[4874]: I0122 11:52:17.303384 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5s7gc"] Jan 22 11:52:17 crc kubenswrapper[4874]: I0122 11:52:17.304123 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5s7gc" podUID="37153c5d-6533-4973-99bd-f682b3c148d9" containerName="registry-server" 
containerID="cri-o://54db0b1ebec1433a5bd88c09690a6a8851c51a92a0a90d5576ee312fb1dc0d4a" gracePeriod=30 Jan 22 11:52:17 crc kubenswrapper[4874]: I0122 11:52:17.720877 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:52:17 crc kubenswrapper[4874]: I0122 11:52:17.881277 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37153c5d-6533-4973-99bd-f682b3c148d9-catalog-content\") pod \"37153c5d-6533-4973-99bd-f682b3c148d9\" (UID: \"37153c5d-6533-4973-99bd-f682b3c148d9\") " Jan 22 11:52:17 crc kubenswrapper[4874]: I0122 11:52:17.881481 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37153c5d-6533-4973-99bd-f682b3c148d9-utilities\") pod \"37153c5d-6533-4973-99bd-f682b3c148d9\" (UID: \"37153c5d-6533-4973-99bd-f682b3c148d9\") " Jan 22 11:52:17 crc kubenswrapper[4874]: I0122 11:52:17.881577 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slv5g\" (UniqueName: \"kubernetes.io/projected/37153c5d-6533-4973-99bd-f682b3c148d9-kube-api-access-slv5g\") pod \"37153c5d-6533-4973-99bd-f682b3c148d9\" (UID: \"37153c5d-6533-4973-99bd-f682b3c148d9\") " Jan 22 11:52:17 crc kubenswrapper[4874]: I0122 11:52:17.883682 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37153c5d-6533-4973-99bd-f682b3c148d9-utilities" (OuterVolumeSpecName: "utilities") pod "37153c5d-6533-4973-99bd-f682b3c148d9" (UID: "37153c5d-6533-4973-99bd-f682b3c148d9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:52:17 crc kubenswrapper[4874]: I0122 11:52:17.887294 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37153c5d-6533-4973-99bd-f682b3c148d9-kube-api-access-slv5g" (OuterVolumeSpecName: "kube-api-access-slv5g") pod "37153c5d-6533-4973-99bd-f682b3c148d9" (UID: "37153c5d-6533-4973-99bd-f682b3c148d9"). InnerVolumeSpecName "kube-api-access-slv5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:52:17 crc kubenswrapper[4874]: I0122 11:52:17.926580 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37153c5d-6533-4973-99bd-f682b3c148d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37153c5d-6533-4973-99bd-f682b3c148d9" (UID: "37153c5d-6533-4973-99bd-f682b3c148d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:52:17 crc kubenswrapper[4874]: I0122 11:52:17.983145 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slv5g\" (UniqueName: \"kubernetes.io/projected/37153c5d-6533-4973-99bd-f682b3c148d9-kube-api-access-slv5g\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:17 crc kubenswrapper[4874]: I0122 11:52:17.983186 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37153c5d-6533-4973-99bd-f682b3c148d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:17 crc kubenswrapper[4874]: I0122 11:52:17.983204 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37153c5d-6533-4973-99bd-f682b3c148d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.192531 4874 generic.go:334] "Generic (PLEG): container finished" podID="37153c5d-6533-4973-99bd-f682b3c148d9" 
containerID="54db0b1ebec1433a5bd88c09690a6a8851c51a92a0a90d5576ee312fb1dc0d4a" exitCode=0 Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.192582 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5s7gc" event={"ID":"37153c5d-6533-4973-99bd-f682b3c148d9","Type":"ContainerDied","Data":"54db0b1ebec1433a5bd88c09690a6a8851c51a92a0a90d5576ee312fb1dc0d4a"} Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.192628 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5s7gc" event={"ID":"37153c5d-6533-4973-99bd-f682b3c148d9","Type":"ContainerDied","Data":"3610f727eca8b59c0ef9ba7a4755e8243ae3a7e6694ae4e499109a32fd98ae07"} Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.192650 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5s7gc" Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.192666 4874 scope.go:117] "RemoveContainer" containerID="54db0b1ebec1433a5bd88c09690a6a8851c51a92a0a90d5576ee312fb1dc0d4a" Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.216690 4874 scope.go:117] "RemoveContainer" containerID="c9c46af6c28993718e9a9e43b2e96b33eb0ad605dab2e2425b80f011941ca8fb" Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.246154 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5s7gc"] Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.253999 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5s7gc"] Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.254200 4874 scope.go:117] "RemoveContainer" containerID="684e002757e3dc3f4607d5fe73a314e71a31507700f72d618222be0e988a644f" Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.279004 4874 scope.go:117] "RemoveContainer" containerID="54db0b1ebec1433a5bd88c09690a6a8851c51a92a0a90d5576ee312fb1dc0d4a" Jan 22 
11:52:18 crc kubenswrapper[4874]: E0122 11:52:18.279655 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54db0b1ebec1433a5bd88c09690a6a8851c51a92a0a90d5576ee312fb1dc0d4a\": container with ID starting with 54db0b1ebec1433a5bd88c09690a6a8851c51a92a0a90d5576ee312fb1dc0d4a not found: ID does not exist" containerID="54db0b1ebec1433a5bd88c09690a6a8851c51a92a0a90d5576ee312fb1dc0d4a" Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.279728 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54db0b1ebec1433a5bd88c09690a6a8851c51a92a0a90d5576ee312fb1dc0d4a"} err="failed to get container status \"54db0b1ebec1433a5bd88c09690a6a8851c51a92a0a90d5576ee312fb1dc0d4a\": rpc error: code = NotFound desc = could not find container \"54db0b1ebec1433a5bd88c09690a6a8851c51a92a0a90d5576ee312fb1dc0d4a\": container with ID starting with 54db0b1ebec1433a5bd88c09690a6a8851c51a92a0a90d5576ee312fb1dc0d4a not found: ID does not exist" Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.279787 4874 scope.go:117] "RemoveContainer" containerID="c9c46af6c28993718e9a9e43b2e96b33eb0ad605dab2e2425b80f011941ca8fb" Jan 22 11:52:18 crc kubenswrapper[4874]: E0122 11:52:18.280507 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9c46af6c28993718e9a9e43b2e96b33eb0ad605dab2e2425b80f011941ca8fb\": container with ID starting with c9c46af6c28993718e9a9e43b2e96b33eb0ad605dab2e2425b80f011941ca8fb not found: ID does not exist" containerID="c9c46af6c28993718e9a9e43b2e96b33eb0ad605dab2e2425b80f011941ca8fb" Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.280579 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9c46af6c28993718e9a9e43b2e96b33eb0ad605dab2e2425b80f011941ca8fb"} err="failed to get container status 
\"c9c46af6c28993718e9a9e43b2e96b33eb0ad605dab2e2425b80f011941ca8fb\": rpc error: code = NotFound desc = could not find container \"c9c46af6c28993718e9a9e43b2e96b33eb0ad605dab2e2425b80f011941ca8fb\": container with ID starting with c9c46af6c28993718e9a9e43b2e96b33eb0ad605dab2e2425b80f011941ca8fb not found: ID does not exist" Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.280617 4874 scope.go:117] "RemoveContainer" containerID="684e002757e3dc3f4607d5fe73a314e71a31507700f72d618222be0e988a644f" Jan 22 11:52:18 crc kubenswrapper[4874]: E0122 11:52:18.281171 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"684e002757e3dc3f4607d5fe73a314e71a31507700f72d618222be0e988a644f\": container with ID starting with 684e002757e3dc3f4607d5fe73a314e71a31507700f72d618222be0e988a644f not found: ID does not exist" containerID="684e002757e3dc3f4607d5fe73a314e71a31507700f72d618222be0e988a644f" Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.281222 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"684e002757e3dc3f4607d5fe73a314e71a31507700f72d618222be0e988a644f"} err="failed to get container status \"684e002757e3dc3f4607d5fe73a314e71a31507700f72d618222be0e988a644f\": rpc error: code = NotFound desc = could not find container \"684e002757e3dc3f4607d5fe73a314e71a31507700f72d618222be0e988a644f\": container with ID starting with 684e002757e3dc3f4607d5fe73a314e71a31507700f72d618222be0e988a644f not found: ID does not exist" Jan 22 11:52:18 crc kubenswrapper[4874]: I0122 11:52:18.727197 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37153c5d-6533-4973-99bd-f682b3c148d9" path="/var/lib/kubelet/pods/37153c5d-6533-4973-99bd-f682b3c148d9/volumes" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.531845 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk"] Jan 22 11:52:21 crc kubenswrapper[4874]: E0122 11:52:21.532377 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37153c5d-6533-4973-99bd-f682b3c148d9" containerName="registry-server" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.532427 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="37153c5d-6533-4973-99bd-f682b3c148d9" containerName="registry-server" Jan 22 11:52:21 crc kubenswrapper[4874]: E0122 11:52:21.532444 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37153c5d-6533-4973-99bd-f682b3c148d9" containerName="extract-utilities" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.532452 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="37153c5d-6533-4973-99bd-f682b3c148d9" containerName="extract-utilities" Jan 22 11:52:21 crc kubenswrapper[4874]: E0122 11:52:21.532470 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37153c5d-6533-4973-99bd-f682b3c148d9" containerName="extract-content" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.532478 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="37153c5d-6533-4973-99bd-f682b3c148d9" containerName="extract-content" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.532590 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="37153c5d-6533-4973-99bd-f682b3c148d9" containerName="registry-server" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.533430 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.538031 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.543446 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk"] Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.634573 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk\" (UID: \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.634726 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk\" (UID: \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.634800 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9v47\" (UniqueName: \"kubernetes.io/projected/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-kube-api-access-z9v47\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk\" (UID: \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" Jan 22 11:52:21 crc kubenswrapper[4874]: 
I0122 11:52:21.736311 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9v47\" (UniqueName: \"kubernetes.io/projected/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-kube-api-access-z9v47\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk\" (UID: \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.736631 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk\" (UID: \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.736720 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk\" (UID: \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.737125 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk\" (UID: \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.737198 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk\" (UID: \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.755458 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9v47\" (UniqueName: \"kubernetes.io/projected/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-kube-api-access-z9v47\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk\" (UID: \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" Jan 22 11:52:21 crc kubenswrapper[4874]: I0122 11:52:21.847172 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" Jan 22 11:52:22 crc kubenswrapper[4874]: I0122 11:52:22.122625 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk"] Jan 22 11:52:22 crc kubenswrapper[4874]: W0122 11:52:22.141471 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9634e8c9_9571_45b6_ad7f_c7d68d40c75a.slice/crio-881c9c6a19afb1c0a047f5dade7d497915424b9f4b71441df5ffc1a8a17feeb6 WatchSource:0}: Error finding container 881c9c6a19afb1c0a047f5dade7d497915424b9f4b71441df5ffc1a8a17feeb6: Status 404 returned error can't find the container with id 881c9c6a19afb1c0a047f5dade7d497915424b9f4b71441df5ffc1a8a17feeb6 Jan 22 11:52:22 crc kubenswrapper[4874]: I0122 11:52:22.229243 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" 
event={"ID":"9634e8c9-9571-45b6-ad7f-c7d68d40c75a","Type":"ContainerStarted","Data":"881c9c6a19afb1c0a047f5dade7d497915424b9f4b71441df5ffc1a8a17feeb6"} Jan 22 11:52:23 crc kubenswrapper[4874]: I0122 11:52:23.238903 4874 generic.go:334] "Generic (PLEG): container finished" podID="9634e8c9-9571-45b6-ad7f-c7d68d40c75a" containerID="eae37d2395c60a97142ae4f46bc1135a93e0ddac46d0a371d1cc3b91beea07a3" exitCode=0 Jan 22 11:52:23 crc kubenswrapper[4874]: I0122 11:52:23.238987 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" event={"ID":"9634e8c9-9571-45b6-ad7f-c7d68d40c75a","Type":"ContainerDied","Data":"eae37d2395c60a97142ae4f46bc1135a93e0ddac46d0a371d1cc3b91beea07a3"} Jan 22 11:52:23 crc kubenswrapper[4874]: I0122 11:52:23.241946 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:52:25 crc kubenswrapper[4874]: I0122 11:52:25.259317 4874 generic.go:334] "Generic (PLEG): container finished" podID="9634e8c9-9571-45b6-ad7f-c7d68d40c75a" containerID="14fc0f306c59c75bd148aa958b96e945154cf3e0099f2d9a38c28941880de481" exitCode=0 Jan 22 11:52:25 crc kubenswrapper[4874]: I0122 11:52:25.259415 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" event={"ID":"9634e8c9-9571-45b6-ad7f-c7d68d40c75a","Type":"ContainerDied","Data":"14fc0f306c59c75bd148aa958b96e945154cf3e0099f2d9a38c28941880de481"} Jan 22 11:52:26 crc kubenswrapper[4874]: I0122 11:52:26.272598 4874 generic.go:334] "Generic (PLEG): container finished" podID="9634e8c9-9571-45b6-ad7f-c7d68d40c75a" containerID="c1b49c9d548e2a106274d791e08e9815878bc897190996ea888b365bc8348c48" exitCode=0 Jan 22 11:52:26 crc kubenswrapper[4874]: I0122 11:52:26.272675 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" event={"ID":"9634e8c9-9571-45b6-ad7f-c7d68d40c75a","Type":"ContainerDied","Data":"c1b49c9d548e2a106274d791e08e9815878bc897190996ea888b365bc8348c48"} Jan 22 11:52:27 crc kubenswrapper[4874]: I0122 11:52:27.552522 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" Jan 22 11:52:27 crc kubenswrapper[4874]: I0122 11:52:27.724083 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-util\") pod \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\" (UID: \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\") " Jan 22 11:52:27 crc kubenswrapper[4874]: I0122 11:52:27.724275 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9v47\" (UniqueName: \"kubernetes.io/projected/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-kube-api-access-z9v47\") pod \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\" (UID: \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\") " Jan 22 11:52:27 crc kubenswrapper[4874]: I0122 11:52:27.724451 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-bundle\") pod \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\" (UID: \"9634e8c9-9571-45b6-ad7f-c7d68d40c75a\") " Jan 22 11:52:27 crc kubenswrapper[4874]: I0122 11:52:27.726192 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-bundle" (OuterVolumeSpecName: "bundle") pod "9634e8c9-9571-45b6-ad7f-c7d68d40c75a" (UID: "9634e8c9-9571-45b6-ad7f-c7d68d40c75a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:52:27 crc kubenswrapper[4874]: I0122 11:52:27.730092 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-kube-api-access-z9v47" (OuterVolumeSpecName: "kube-api-access-z9v47") pod "9634e8c9-9571-45b6-ad7f-c7d68d40c75a" (UID: "9634e8c9-9571-45b6-ad7f-c7d68d40c75a"). InnerVolumeSpecName "kube-api-access-z9v47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:52:27 crc kubenswrapper[4874]: I0122 11:52:27.826811 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9v47\" (UniqueName: \"kubernetes.io/projected/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-kube-api-access-z9v47\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:27 crc kubenswrapper[4874]: I0122 11:52:27.826874 4874 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:27 crc kubenswrapper[4874]: I0122 11:52:27.980569 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-util" (OuterVolumeSpecName: "util") pod "9634e8c9-9571-45b6-ad7f-c7d68d40c75a" (UID: "9634e8c9-9571-45b6-ad7f-c7d68d40c75a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:52:28 crc kubenswrapper[4874]: I0122 11:52:28.030113 4874 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9634e8c9-9571-45b6-ad7f-c7d68d40c75a-util\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:28 crc kubenswrapper[4874]: I0122 11:52:28.289642 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" Jan 22 11:52:28 crc kubenswrapper[4874]: I0122 11:52:28.289638 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk" event={"ID":"9634e8c9-9571-45b6-ad7f-c7d68d40c75a","Type":"ContainerDied","Data":"881c9c6a19afb1c0a047f5dade7d497915424b9f4b71441df5ffc1a8a17feeb6"} Jan 22 11:52:28 crc kubenswrapper[4874]: I0122 11:52:28.289817 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="881c9c6a19afb1c0a047f5dade7d497915424b9f4b71441df5ffc1a8a17feeb6" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.524449 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7"] Jan 22 11:52:30 crc kubenswrapper[4874]: E0122 11:52:30.525288 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9634e8c9-9571-45b6-ad7f-c7d68d40c75a" containerName="extract" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.525355 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9634e8c9-9571-45b6-ad7f-c7d68d40c75a" containerName="extract" Jan 22 11:52:30 crc kubenswrapper[4874]: E0122 11:52:30.525427 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9634e8c9-9571-45b6-ad7f-c7d68d40c75a" containerName="util" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.525484 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9634e8c9-9571-45b6-ad7f-c7d68d40c75a" containerName="util" Jan 22 11:52:30 crc kubenswrapper[4874]: E0122 11:52:30.525540 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9634e8c9-9571-45b6-ad7f-c7d68d40c75a" containerName="pull" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.525590 4874 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9634e8c9-9571-45b6-ad7f-c7d68d40c75a" containerName="pull" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.525724 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="9634e8c9-9571-45b6-ad7f-c7d68d40c75a" containerName="extract" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.526415 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.529157 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.551698 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7"] Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.603081 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/adad4abb-e416-4993-a166-0ac45ac75521-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7\" (UID: \"adad4abb-e416-4993-a166-0ac45ac75521\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.603151 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/adad4abb-e416-4993-a166-0ac45ac75521-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7\" (UID: \"adad4abb-e416-4993-a166-0ac45ac75521\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.603330 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8hrrh\" (UniqueName: \"kubernetes.io/projected/adad4abb-e416-4993-a166-0ac45ac75521-kube-api-access-8hrrh\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7\" (UID: \"adad4abb-e416-4993-a166-0ac45ac75521\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.704049 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/adad4abb-e416-4993-a166-0ac45ac75521-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7\" (UID: \"adad4abb-e416-4993-a166-0ac45ac75521\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.704100 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/adad4abb-e416-4993-a166-0ac45ac75521-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7\" (UID: \"adad4abb-e416-4993-a166-0ac45ac75521\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.704155 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hrrh\" (UniqueName: \"kubernetes.io/projected/adad4abb-e416-4993-a166-0ac45ac75521-kube-api-access-8hrrh\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7\" (UID: \"adad4abb-e416-4993-a166-0ac45ac75521\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.705058 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/adad4abb-e416-4993-a166-0ac45ac75521-util\") pod 
\"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7\" (UID: \"adad4abb-e416-4993-a166-0ac45ac75521\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.705130 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/adad4abb-e416-4993-a166-0ac45ac75521-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7\" (UID: \"adad4abb-e416-4993-a166-0ac45ac75521\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.739068 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hrrh\" (UniqueName: \"kubernetes.io/projected/adad4abb-e416-4993-a166-0ac45ac75521-kube-api-access-8hrrh\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7\" (UID: \"adad4abb-e416-4993-a166-0ac45ac75521\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" Jan 22 11:52:30 crc kubenswrapper[4874]: I0122 11:52:30.843596 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.091659 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7"] Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.320864 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" event={"ID":"adad4abb-e416-4993-a166-0ac45ac75521","Type":"ContainerStarted","Data":"e2a672e474a2472d7f351a57371d8c07d2ea6ba5df72efe12375a43e4f3f5689"} Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.323365 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf"] Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.325304 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.331136 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf"] Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.413160 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4np4\" (UniqueName: \"kubernetes.io/projected/4db7503f-969b-4507-81bc-fb6ae0579495-kube-api-access-k4np4\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf\" (UID: \"4db7503f-969b-4507-81bc-fb6ae0579495\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.413246 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4db7503f-969b-4507-81bc-fb6ae0579495-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf\" (UID: \"4db7503f-969b-4507-81bc-fb6ae0579495\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.413335 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4db7503f-969b-4507-81bc-fb6ae0579495-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf\" (UID: \"4db7503f-969b-4507-81bc-fb6ae0579495\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.514721 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4np4\" (UniqueName: 
\"kubernetes.io/projected/4db7503f-969b-4507-81bc-fb6ae0579495-kube-api-access-k4np4\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf\" (UID: \"4db7503f-969b-4507-81bc-fb6ae0579495\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.514788 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4db7503f-969b-4507-81bc-fb6ae0579495-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf\" (UID: \"4db7503f-969b-4507-81bc-fb6ae0579495\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.514832 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4db7503f-969b-4507-81bc-fb6ae0579495-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf\" (UID: \"4db7503f-969b-4507-81bc-fb6ae0579495\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.515704 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4db7503f-969b-4507-81bc-fb6ae0579495-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf\" (UID: \"4db7503f-969b-4507-81bc-fb6ae0579495\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.515714 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4db7503f-969b-4507-81bc-fb6ae0579495-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf\" (UID: 
\"4db7503f-969b-4507-81bc-fb6ae0579495\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.547079 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4np4\" (UniqueName: \"kubernetes.io/projected/4db7503f-969b-4507-81bc-fb6ae0579495-kube-api-access-k4np4\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf\" (UID: \"4db7503f-969b-4507-81bc-fb6ae0579495\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" Jan 22 11:52:31 crc kubenswrapper[4874]: I0122 11:52:31.640759 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" Jan 22 11:52:32 crc kubenswrapper[4874]: I0122 11:52:32.097268 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf"] Jan 22 11:52:32 crc kubenswrapper[4874]: I0122 11:52:32.332574 4874 generic.go:334] "Generic (PLEG): container finished" podID="adad4abb-e416-4993-a166-0ac45ac75521" containerID="80ace2274ec8740864fcc24a9c36c173227b492398f3b830bb5cf776977b640f" exitCode=0 Jan 22 11:52:32 crc kubenswrapper[4874]: I0122 11:52:32.333092 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" event={"ID":"adad4abb-e416-4993-a166-0ac45ac75521","Type":"ContainerDied","Data":"80ace2274ec8740864fcc24a9c36c173227b492398f3b830bb5cf776977b640f"} Jan 22 11:52:32 crc kubenswrapper[4874]: I0122 11:52:32.337690 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" 
event={"ID":"4db7503f-969b-4507-81bc-fb6ae0579495","Type":"ContainerStarted","Data":"9422670f2856c64b6f212b3001dace3d7fad6ff84246fe60c6d97aed2a878373"} Jan 22 11:52:33 crc kubenswrapper[4874]: I0122 11:52:33.349555 4874 generic.go:334] "Generic (PLEG): container finished" podID="4db7503f-969b-4507-81bc-fb6ae0579495" containerID="3e34f556369d896d056f66ff406929737cade6ea674c92a6bc257901a0bf77a7" exitCode=0 Jan 22 11:52:33 crc kubenswrapper[4874]: I0122 11:52:33.349597 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" event={"ID":"4db7503f-969b-4507-81bc-fb6ae0579495","Type":"ContainerDied","Data":"3e34f556369d896d056f66ff406929737cade6ea674c92a6bc257901a0bf77a7"} Jan 22 11:52:34 crc kubenswrapper[4874]: I0122 11:52:34.356980 4874 generic.go:334] "Generic (PLEG): container finished" podID="adad4abb-e416-4993-a166-0ac45ac75521" containerID="11fb58e2bc00eea5c08b47475d97aa9a4fd2f725b84421da88eb8c5594ad6ad3" exitCode=0 Jan 22 11:52:34 crc kubenswrapper[4874]: I0122 11:52:34.357100 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" event={"ID":"adad4abb-e416-4993-a166-0ac45ac75521","Type":"ContainerDied","Data":"11fb58e2bc00eea5c08b47475d97aa9a4fd2f725b84421da88eb8c5594ad6ad3"} Jan 22 11:52:35 crc kubenswrapper[4874]: I0122 11:52:35.369467 4874 generic.go:334] "Generic (PLEG): container finished" podID="adad4abb-e416-4993-a166-0ac45ac75521" containerID="ff4168d9b04656a0333f2d6281f8d25f2eccae29873f6c3a6e07cc5c3e2e8c25" exitCode=0 Jan 22 11:52:35 crc kubenswrapper[4874]: I0122 11:52:35.369592 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" event={"ID":"adad4abb-e416-4993-a166-0ac45ac75521","Type":"ContainerDied","Data":"ff4168d9b04656a0333f2d6281f8d25f2eccae29873f6c3a6e07cc5c3e2e8c25"} 
Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.376154 4874 generic.go:334] "Generic (PLEG): container finished" podID="4db7503f-969b-4507-81bc-fb6ae0579495" containerID="0134763dc613f6a61fcd3ffc52abae94a811a88ccffcae4fd175570f3b7d4852" exitCode=0 Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.377180 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" event={"ID":"4db7503f-969b-4507-81bc-fb6ae0579495","Type":"ContainerDied","Data":"0134763dc613f6a61fcd3ffc52abae94a811a88ccffcae4fd175570f3b7d4852"} Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.558671 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5"] Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.560260 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.575683 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5"] Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.604406 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5\" (UID: \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.604445 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-util\") pod 
\"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5\" (UID: \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.604493 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6tsr\" (UniqueName: \"kubernetes.io/projected/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-kube-api-access-t6tsr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5\" (UID: \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.681772 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.705050 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6tsr\" (UniqueName: \"kubernetes.io/projected/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-kube-api-access-t6tsr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5\" (UID: \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.705124 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5\" (UID: \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.705150 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5\" (UID: \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.705590 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5\" (UID: \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.706346 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5\" (UID: \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.752372 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6tsr\" (UniqueName: \"kubernetes.io/projected/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-kube-api-access-t6tsr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5\" (UID: \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.805598 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/adad4abb-e416-4993-a166-0ac45ac75521-bundle\") pod 
\"adad4abb-e416-4993-a166-0ac45ac75521\" (UID: \"adad4abb-e416-4993-a166-0ac45ac75521\") " Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.805698 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hrrh\" (UniqueName: \"kubernetes.io/projected/adad4abb-e416-4993-a166-0ac45ac75521-kube-api-access-8hrrh\") pod \"adad4abb-e416-4993-a166-0ac45ac75521\" (UID: \"adad4abb-e416-4993-a166-0ac45ac75521\") " Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.805759 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/adad4abb-e416-4993-a166-0ac45ac75521-util\") pod \"adad4abb-e416-4993-a166-0ac45ac75521\" (UID: \"adad4abb-e416-4993-a166-0ac45ac75521\") " Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.806525 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adad4abb-e416-4993-a166-0ac45ac75521-bundle" (OuterVolumeSpecName: "bundle") pod "adad4abb-e416-4993-a166-0ac45ac75521" (UID: "adad4abb-e416-4993-a166-0ac45ac75521"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.809019 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adad4abb-e416-4993-a166-0ac45ac75521-kube-api-access-8hrrh" (OuterVolumeSpecName: "kube-api-access-8hrrh") pod "adad4abb-e416-4993-a166-0ac45ac75521" (UID: "adad4abb-e416-4993-a166-0ac45ac75521"). InnerVolumeSpecName "kube-api-access-8hrrh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.823456 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adad4abb-e416-4993-a166-0ac45ac75521-util" (OuterVolumeSpecName: "util") pod "adad4abb-e416-4993-a166-0ac45ac75521" (UID: "adad4abb-e416-4993-a166-0ac45ac75521"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.890453 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.907206 4874 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/adad4abb-e416-4993-a166-0ac45ac75521-util\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.907235 4874 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/adad4abb-e416-4993-a166-0ac45ac75521-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:36 crc kubenswrapper[4874]: I0122 11:52:36.907244 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hrrh\" (UniqueName: \"kubernetes.io/projected/adad4abb-e416-4993-a166-0ac45ac75521-kube-api-access-8hrrh\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:37 crc kubenswrapper[4874]: I0122 11:52:37.158371 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5"] Jan 22 11:52:37 crc kubenswrapper[4874]: W0122 11:52:37.168634 4874 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a48ea4e_8d83_4484_8cab_e9a38e86a2e1.slice/crio-3052c644abdb9c0b9e56a9a8e5229cfd6a7be0945646a4a78aa60b838b8c13d8 WatchSource:0}: Error finding container 3052c644abdb9c0b9e56a9a8e5229cfd6a7be0945646a4a78aa60b838b8c13d8: Status 404 returned error can't find the container with id 3052c644abdb9c0b9e56a9a8e5229cfd6a7be0945646a4a78aa60b838b8c13d8 Jan 22 11:52:37 crc kubenswrapper[4874]: I0122 11:52:37.382824 4874 generic.go:334] "Generic (PLEG): container finished" podID="4db7503f-969b-4507-81bc-fb6ae0579495" containerID="0d803dc33d5b1561687c54af4a3c7e656bf69906e6220e94080cd3625f5561bb" exitCode=0 Jan 22 11:52:37 crc kubenswrapper[4874]: I0122 11:52:37.382876 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" event={"ID":"4db7503f-969b-4507-81bc-fb6ae0579495","Type":"ContainerDied","Data":"0d803dc33d5b1561687c54af4a3c7e656bf69906e6220e94080cd3625f5561bb"} Jan 22 11:52:37 crc kubenswrapper[4874]: I0122 11:52:37.384755 4874 generic.go:334] "Generic (PLEG): container finished" podID="9a48ea4e-8d83-4484-8cab-e9a38e86a2e1" containerID="26ee57824a8248b7caf445e3d93c945e9156e940ffa5f1516e4ab1cefcd20ae4" exitCode=0 Jan 22 11:52:37 crc kubenswrapper[4874]: I0122 11:52:37.384842 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" event={"ID":"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1","Type":"ContainerDied","Data":"26ee57824a8248b7caf445e3d93c945e9156e940ffa5f1516e4ab1cefcd20ae4"} Jan 22 11:52:37 crc kubenswrapper[4874]: I0122 11:52:37.384897 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" 
event={"ID":"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1","Type":"ContainerStarted","Data":"3052c644abdb9c0b9e56a9a8e5229cfd6a7be0945646a4a78aa60b838b8c13d8"} Jan 22 11:52:37 crc kubenswrapper[4874]: I0122 11:52:37.388810 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" event={"ID":"adad4abb-e416-4993-a166-0ac45ac75521","Type":"ContainerDied","Data":"e2a672e474a2472d7f351a57371d8c07d2ea6ba5df72efe12375a43e4f3f5689"} Jan 22 11:52:37 crc kubenswrapper[4874]: I0122 11:52:37.388843 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a672e474a2472d7f351a57371d8c07d2ea6ba5df72efe12375a43e4f3f5689" Jan 22 11:52:37 crc kubenswrapper[4874]: I0122 11:52:37.388897 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7" Jan 22 11:52:38 crc kubenswrapper[4874]: I0122 11:52:38.602818 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" Jan 22 11:52:38 crc kubenswrapper[4874]: I0122 11:52:38.728429 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4db7503f-969b-4507-81bc-fb6ae0579495-bundle\") pod \"4db7503f-969b-4507-81bc-fb6ae0579495\" (UID: \"4db7503f-969b-4507-81bc-fb6ae0579495\") " Jan 22 11:52:38 crc kubenswrapper[4874]: I0122 11:52:38.730020 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4db7503f-969b-4507-81bc-fb6ae0579495-util\") pod \"4db7503f-969b-4507-81bc-fb6ae0579495\" (UID: \"4db7503f-969b-4507-81bc-fb6ae0579495\") " Jan 22 11:52:38 crc kubenswrapper[4874]: I0122 11:52:38.730170 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4np4\" (UniqueName: \"kubernetes.io/projected/4db7503f-969b-4507-81bc-fb6ae0579495-kube-api-access-k4np4\") pod \"4db7503f-969b-4507-81bc-fb6ae0579495\" (UID: \"4db7503f-969b-4507-81bc-fb6ae0579495\") " Jan 22 11:52:38 crc kubenswrapper[4874]: I0122 11:52:38.730984 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4db7503f-969b-4507-81bc-fb6ae0579495-bundle" (OuterVolumeSpecName: "bundle") pod "4db7503f-969b-4507-81bc-fb6ae0579495" (UID: "4db7503f-969b-4507-81bc-fb6ae0579495"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:52:38 crc kubenswrapper[4874]: I0122 11:52:38.731469 4874 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4db7503f-969b-4507-81bc-fb6ae0579495-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:38 crc kubenswrapper[4874]: I0122 11:52:38.748636 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4db7503f-969b-4507-81bc-fb6ae0579495-kube-api-access-k4np4" (OuterVolumeSpecName: "kube-api-access-k4np4") pod "4db7503f-969b-4507-81bc-fb6ae0579495" (UID: "4db7503f-969b-4507-81bc-fb6ae0579495"). InnerVolumeSpecName "kube-api-access-k4np4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:52:38 crc kubenswrapper[4874]: I0122 11:52:38.752665 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4db7503f-969b-4507-81bc-fb6ae0579495-util" (OuterVolumeSpecName: "util") pod "4db7503f-969b-4507-81bc-fb6ae0579495" (UID: "4db7503f-969b-4507-81bc-fb6ae0579495"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:52:38 crc kubenswrapper[4874]: I0122 11:52:38.832346 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4np4\" (UniqueName: \"kubernetes.io/projected/4db7503f-969b-4507-81bc-fb6ae0579495-kube-api-access-k4np4\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:38 crc kubenswrapper[4874]: I0122 11:52:38.832407 4874 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4db7503f-969b-4507-81bc-fb6ae0579495-util\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:39 crc kubenswrapper[4874]: I0122 11:52:39.406468 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" event={"ID":"4db7503f-969b-4507-81bc-fb6ae0579495","Type":"ContainerDied","Data":"9422670f2856c64b6f212b3001dace3d7fad6ff84246fe60c6d97aed2a878373"} Jan 22 11:52:39 crc kubenswrapper[4874]: I0122 11:52:39.406508 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9422670f2856c64b6f212b3001dace3d7fad6ff84246fe60c6d97aed2a878373" Jan 22 11:52:39 crc kubenswrapper[4874]: I0122 11:52:39.406529 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.221385 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-w7v89"] Jan 22 11:52:40 crc kubenswrapper[4874]: E0122 11:52:40.221651 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adad4abb-e416-4993-a166-0ac45ac75521" containerName="pull" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.221668 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="adad4abb-e416-4993-a166-0ac45ac75521" containerName="pull" Jan 22 11:52:40 crc kubenswrapper[4874]: E0122 11:52:40.221685 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adad4abb-e416-4993-a166-0ac45ac75521" containerName="extract" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.221692 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="adad4abb-e416-4993-a166-0ac45ac75521" containerName="extract" Jan 22 11:52:40 crc kubenswrapper[4874]: E0122 11:52:40.221702 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adad4abb-e416-4993-a166-0ac45ac75521" containerName="util" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.221710 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="adad4abb-e416-4993-a166-0ac45ac75521" containerName="util" Jan 22 11:52:40 crc kubenswrapper[4874]: E0122 11:52:40.221720 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db7503f-969b-4507-81bc-fb6ae0579495" containerName="pull" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.221727 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db7503f-969b-4507-81bc-fb6ae0579495" containerName="pull" Jan 22 11:52:40 crc kubenswrapper[4874]: E0122 11:52:40.221738 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db7503f-969b-4507-81bc-fb6ae0579495" containerName="util" 
Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.221744 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db7503f-969b-4507-81bc-fb6ae0579495" containerName="util" Jan 22 11:52:40 crc kubenswrapper[4874]: E0122 11:52:40.221760 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db7503f-969b-4507-81bc-fb6ae0579495" containerName="extract" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.221767 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db7503f-969b-4507-81bc-fb6ae0579495" containerName="extract" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.221898 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db7503f-969b-4507-81bc-fb6ae0579495" containerName="extract" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.221907 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="adad4abb-e416-4993-a166-0ac45ac75521" containerName="extract" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.222307 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7v89" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.224387 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.225017 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-7hlw4" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.225207 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.233378 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-w7v89"] Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.355048 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzc57\" (UniqueName: \"kubernetes.io/projected/71a74c6d-7ff3-4a3a-8c9d-bfa212b381b2-kube-api-access-lzc57\") pod \"obo-prometheus-operator-68bc856cb9-w7v89\" (UID: \"71a74c6d-7ff3-4a3a-8c9d-bfa212b381b2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7v89" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.393444 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8"] Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.394184 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.396384 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.396790 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-blzqr" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.398494 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr"] Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.399237 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.407902 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr"] Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.411306 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8"] Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.456797 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzc57\" (UniqueName: \"kubernetes.io/projected/71a74c6d-7ff3-4a3a-8c9d-bfa212b381b2-kube-api-access-lzc57\") pod \"obo-prometheus-operator-68bc856cb9-w7v89\" (UID: \"71a74c6d-7ff3-4a3a-8c9d-bfa212b381b2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7v89" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.474832 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzc57\" (UniqueName: 
\"kubernetes.io/projected/71a74c6d-7ff3-4a3a-8c9d-bfa212b381b2-kube-api-access-lzc57\") pod \"obo-prometheus-operator-68bc856cb9-w7v89\" (UID: \"71a74c6d-7ff3-4a3a-8c9d-bfa212b381b2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7v89" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.547469 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7v89" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.558124 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a18bf90-2af7-4296-9416-1368e89a8a03-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr\" (UID: \"7a18bf90-2af7-4296-9416-1368e89a8a03\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.558186 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a18bf90-2af7-4296-9416-1368e89a8a03-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr\" (UID: \"7a18bf90-2af7-4296-9416-1368e89a8a03\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.558293 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60c00b52-d6b8-440e-9e60-966b44a87577-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8\" (UID: \"60c00b52-d6b8-440e-9e60-966b44a87577\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.558342 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60c00b52-d6b8-440e-9e60-966b44a87577-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8\" (UID: \"60c00b52-d6b8-440e-9e60-966b44a87577\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.573329 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-jblsg"] Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.574067 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jblsg" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.575867 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-9msn9" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.576934 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.585314 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-jblsg"] Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.660066 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a18bf90-2af7-4296-9416-1368e89a8a03-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr\" (UID: \"7a18bf90-2af7-4296-9416-1368e89a8a03\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.660157 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/60c00b52-d6b8-440e-9e60-966b44a87577-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8\" (UID: \"60c00b52-d6b8-440e-9e60-966b44a87577\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.660208 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60c00b52-d6b8-440e-9e60-966b44a87577-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8\" (UID: \"60c00b52-d6b8-440e-9e60-966b44a87577\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.660255 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a18bf90-2af7-4296-9416-1368e89a8a03-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr\" (UID: \"7a18bf90-2af7-4296-9416-1368e89a8a03\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.664718 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a18bf90-2af7-4296-9416-1368e89a8a03-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr\" (UID: \"7a18bf90-2af7-4296-9416-1368e89a8a03\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.665796 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a18bf90-2af7-4296-9416-1368e89a8a03-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr\" (UID: 
\"7a18bf90-2af7-4296-9416-1368e89a8a03\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.666714 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-kqrkd"] Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.667518 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-kqrkd" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.669147 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60c00b52-d6b8-440e-9e60-966b44a87577-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8\" (UID: \"60c00b52-d6b8-440e-9e60-966b44a87577\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.669159 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60c00b52-d6b8-440e-9e60-966b44a87577-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8\" (UID: \"60c00b52-d6b8-440e-9e60-966b44a87577\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.669448 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-jpmjt" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.684305 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-kqrkd"] Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.708996 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.728030 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.761671 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/edd7744e-1336-486a-90de-6568bae7f788-observability-operator-tls\") pod \"observability-operator-59bdc8b94-jblsg\" (UID: \"edd7744e-1336-486a-90de-6568bae7f788\") " pod="openshift-operators/observability-operator-59bdc8b94-jblsg" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.761735 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgkks\" (UniqueName: \"kubernetes.io/projected/edd7744e-1336-486a-90de-6568bae7f788-kube-api-access-mgkks\") pod \"observability-operator-59bdc8b94-jblsg\" (UID: \"edd7744e-1336-486a-90de-6568bae7f788\") " pod="openshift-operators/observability-operator-59bdc8b94-jblsg" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.862549 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/97207973-f6a4-4068-b777-c964c12092fd-openshift-service-ca\") pod \"perses-operator-5bf474d74f-kqrkd\" (UID: \"97207973-f6a4-4068-b777-c964c12092fd\") " pod="openshift-operators/perses-operator-5bf474d74f-kqrkd" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.862884 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgkks\" (UniqueName: \"kubernetes.io/projected/edd7744e-1336-486a-90de-6568bae7f788-kube-api-access-mgkks\") pod 
\"observability-operator-59bdc8b94-jblsg\" (UID: \"edd7744e-1336-486a-90de-6568bae7f788\") " pod="openshift-operators/observability-operator-59bdc8b94-jblsg" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.862923 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n25b\" (UniqueName: \"kubernetes.io/projected/97207973-f6a4-4068-b777-c964c12092fd-kube-api-access-4n25b\") pod \"perses-operator-5bf474d74f-kqrkd\" (UID: \"97207973-f6a4-4068-b777-c964c12092fd\") " pod="openshift-operators/perses-operator-5bf474d74f-kqrkd" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.862979 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/edd7744e-1336-486a-90de-6568bae7f788-observability-operator-tls\") pod \"observability-operator-59bdc8b94-jblsg\" (UID: \"edd7744e-1336-486a-90de-6568bae7f788\") " pod="openshift-operators/observability-operator-59bdc8b94-jblsg" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.874278 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/edd7744e-1336-486a-90de-6568bae7f788-observability-operator-tls\") pod \"observability-operator-59bdc8b94-jblsg\" (UID: \"edd7744e-1336-486a-90de-6568bae7f788\") " pod="openshift-operators/observability-operator-59bdc8b94-jblsg" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.878251 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgkks\" (UniqueName: \"kubernetes.io/projected/edd7744e-1336-486a-90de-6568bae7f788-kube-api-access-mgkks\") pod \"observability-operator-59bdc8b94-jblsg\" (UID: \"edd7744e-1336-486a-90de-6568bae7f788\") " pod="openshift-operators/observability-operator-59bdc8b94-jblsg" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.897762 4874 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jblsg" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.964748 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/97207973-f6a4-4068-b777-c964c12092fd-openshift-service-ca\") pod \"perses-operator-5bf474d74f-kqrkd\" (UID: \"97207973-f6a4-4068-b777-c964c12092fd\") " pod="openshift-operators/perses-operator-5bf474d74f-kqrkd" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.964829 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n25b\" (UniqueName: \"kubernetes.io/projected/97207973-f6a4-4068-b777-c964c12092fd-kube-api-access-4n25b\") pod \"perses-operator-5bf474d74f-kqrkd\" (UID: \"97207973-f6a4-4068-b777-c964c12092fd\") " pod="openshift-operators/perses-operator-5bf474d74f-kqrkd" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.966147 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/97207973-f6a4-4068-b777-c964c12092fd-openshift-service-ca\") pod \"perses-operator-5bf474d74f-kqrkd\" (UID: \"97207973-f6a4-4068-b777-c964c12092fd\") " pod="openshift-operators/perses-operator-5bf474d74f-kqrkd" Jan 22 11:52:40 crc kubenswrapper[4874]: I0122 11:52:40.980845 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n25b\" (UniqueName: \"kubernetes.io/projected/97207973-f6a4-4068-b777-c964c12092fd-kube-api-access-4n25b\") pod \"perses-operator-5bf474d74f-kqrkd\" (UID: \"97207973-f6a4-4068-b777-c964c12092fd\") " pod="openshift-operators/perses-operator-5bf474d74f-kqrkd" Jan 22 11:52:41 crc kubenswrapper[4874]: I0122 11:52:41.034494 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-kqrkd" Jan 22 11:52:42 crc kubenswrapper[4874]: I0122 11:52:42.775912 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr"] Jan 22 11:52:42 crc kubenswrapper[4874]: I0122 11:52:42.907047 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-w7v89"] Jan 22 11:52:42 crc kubenswrapper[4874]: I0122 11:52:42.963808 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-kqrkd"] Jan 22 11:52:43 crc kubenswrapper[4874]: I0122 11:52:43.009188 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8"] Jan 22 11:52:43 crc kubenswrapper[4874]: W0122 11:52:43.020697 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60c00b52_d6b8_440e_9e60_966b44a87577.slice/crio-071374853f0b22bfa420a0207169cb3856cdc1d32c1f0b80507f25cd82952c66 WatchSource:0}: Error finding container 071374853f0b22bfa420a0207169cb3856cdc1d32c1f0b80507f25cd82952c66: Status 404 returned error can't find the container with id 071374853f0b22bfa420a0207169cb3856cdc1d32c1f0b80507f25cd82952c66 Jan 22 11:52:43 crc kubenswrapper[4874]: I0122 11:52:43.053008 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-jblsg"] Jan 22 11:52:43 crc kubenswrapper[4874]: W0122 11:52:43.059814 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedd7744e_1336_486a_90de_6568bae7f788.slice/crio-84faddd9561c05f9272a73c715714a774feee2a07caebf106d514f9188949bec WatchSource:0}: Error finding container 84faddd9561c05f9272a73c715714a774feee2a07caebf106d514f9188949bec: 
Status 404 returned error can't find the container with id 84faddd9561c05f9272a73c715714a774feee2a07caebf106d514f9188949bec Jan 22 11:52:43 crc kubenswrapper[4874]: I0122 11:52:43.425999 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-kqrkd" event={"ID":"97207973-f6a4-4068-b777-c964c12092fd","Type":"ContainerStarted","Data":"af7d56521f22776d7a26cbea5573c03beeed3c46a601ea393e183b076b52007a"} Jan 22 11:52:43 crc kubenswrapper[4874]: I0122 11:52:43.427116 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7v89" event={"ID":"71a74c6d-7ff3-4a3a-8c9d-bfa212b381b2","Type":"ContainerStarted","Data":"2939ac2a943208155d80f64dba9c1fe0e1633524cc7581fc1b9217a8fb92bff3"} Jan 22 11:52:43 crc kubenswrapper[4874]: I0122 11:52:43.428177 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8" event={"ID":"60c00b52-d6b8-440e-9e60-966b44a87577","Type":"ContainerStarted","Data":"071374853f0b22bfa420a0207169cb3856cdc1d32c1f0b80507f25cd82952c66"} Jan 22 11:52:43 crc kubenswrapper[4874]: I0122 11:52:43.429656 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-jblsg" event={"ID":"edd7744e-1336-486a-90de-6568bae7f788","Type":"ContainerStarted","Data":"84faddd9561c05f9272a73c715714a774feee2a07caebf106d514f9188949bec"} Jan 22 11:52:43 crc kubenswrapper[4874]: I0122 11:52:43.431093 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr" event={"ID":"7a18bf90-2af7-4296-9416-1368e89a8a03","Type":"ContainerStarted","Data":"ee76d877051ccfd2790773e2f151b9634ad8539a8628521bb17f302f5b024389"} Jan 22 11:52:43 crc kubenswrapper[4874]: I0122 11:52:43.520438 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:52:43 crc kubenswrapper[4874]: I0122 11:52:43.520815 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:52:45 crc kubenswrapper[4874]: I0122 11:52:45.446645 4874 generic.go:334] "Generic (PLEG): container finished" podID="9a48ea4e-8d83-4484-8cab-e9a38e86a2e1" containerID="8f4fba73edd5997fdd330668ad51504fc6f4795841dfda0808d04ee3f53a0555" exitCode=0 Jan 22 11:52:45 crc kubenswrapper[4874]: I0122 11:52:45.446830 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" event={"ID":"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1","Type":"ContainerDied","Data":"8f4fba73edd5997fdd330668ad51504fc6f4795841dfda0808d04ee3f53a0555"} Jan 22 11:52:46 crc kubenswrapper[4874]: I0122 11:52:46.457128 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" event={"ID":"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1","Type":"ContainerStarted","Data":"fc638188b36fb44914ecdf386b1c4e8e730c00e385e24ba36def328ef5b5bbcb"} Jan 22 11:52:46 crc kubenswrapper[4874]: I0122 11:52:46.481119 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" podStartSLOduration=3.077951051 podStartE2EDuration="10.481091941s" podCreationTimestamp="2026-01-22 11:52:36 +0000 UTC" firstStartedPulling="2026-01-22 11:52:37.387343059 +0000 UTC m=+731.232414129" 
lastFinishedPulling="2026-01-22 11:52:44.790483949 +0000 UTC m=+738.635555019" observedRunningTime="2026-01-22 11:52:46.477103278 +0000 UTC m=+740.322174368" watchObservedRunningTime="2026-01-22 11:52:46.481091941 +0000 UTC m=+740.326163031" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.291690 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-864f7dd768-szq44"] Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.292562 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-864f7dd768-szq44" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.296831 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.297114 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.297281 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-gmlh6" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.297503 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.322258 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-864f7dd768-szq44"] Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.458575 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b54bc0d4-cf91-46af-bc4b-ea963cbd59bf-webhook-cert\") pod \"elastic-operator-864f7dd768-szq44\" (UID: \"b54bc0d4-cf91-46af-bc4b-ea963cbd59bf\") " pod="service-telemetry/elastic-operator-864f7dd768-szq44" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.458673 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b54bc0d4-cf91-46af-bc4b-ea963cbd59bf-apiservice-cert\") pod \"elastic-operator-864f7dd768-szq44\" (UID: \"b54bc0d4-cf91-46af-bc4b-ea963cbd59bf\") " pod="service-telemetry/elastic-operator-864f7dd768-szq44" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.458768 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db6l9\" (UniqueName: \"kubernetes.io/projected/b54bc0d4-cf91-46af-bc4b-ea963cbd59bf-kube-api-access-db6l9\") pod \"elastic-operator-864f7dd768-szq44\" (UID: \"b54bc0d4-cf91-46af-bc4b-ea963cbd59bf\") " pod="service-telemetry/elastic-operator-864f7dd768-szq44" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.472435 4874 generic.go:334] "Generic (PLEG): container finished" podID="9a48ea4e-8d83-4484-8cab-e9a38e86a2e1" containerID="fc638188b36fb44914ecdf386b1c4e8e730c00e385e24ba36def328ef5b5bbcb" exitCode=0 Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.472485 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" event={"ID":"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1","Type":"ContainerDied","Data":"fc638188b36fb44914ecdf386b1c4e8e730c00e385e24ba36def328ef5b5bbcb"} Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.560011 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db6l9\" (UniqueName: \"kubernetes.io/projected/b54bc0d4-cf91-46af-bc4b-ea963cbd59bf-kube-api-access-db6l9\") pod \"elastic-operator-864f7dd768-szq44\" (UID: \"b54bc0d4-cf91-46af-bc4b-ea963cbd59bf\") " pod="service-telemetry/elastic-operator-864f7dd768-szq44" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.560905 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/b54bc0d4-cf91-46af-bc4b-ea963cbd59bf-webhook-cert\") pod \"elastic-operator-864f7dd768-szq44\" (UID: \"b54bc0d4-cf91-46af-bc4b-ea963cbd59bf\") " pod="service-telemetry/elastic-operator-864f7dd768-szq44" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.560983 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b54bc0d4-cf91-46af-bc4b-ea963cbd59bf-apiservice-cert\") pod \"elastic-operator-864f7dd768-szq44\" (UID: \"b54bc0d4-cf91-46af-bc4b-ea963cbd59bf\") " pod="service-telemetry/elastic-operator-864f7dd768-szq44" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.573492 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b54bc0d4-cf91-46af-bc4b-ea963cbd59bf-webhook-cert\") pod \"elastic-operator-864f7dd768-szq44\" (UID: \"b54bc0d4-cf91-46af-bc4b-ea963cbd59bf\") " pod="service-telemetry/elastic-operator-864f7dd768-szq44" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.581260 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db6l9\" (UniqueName: \"kubernetes.io/projected/b54bc0d4-cf91-46af-bc4b-ea963cbd59bf-kube-api-access-db6l9\") pod \"elastic-operator-864f7dd768-szq44\" (UID: \"b54bc0d4-cf91-46af-bc4b-ea963cbd59bf\") " pod="service-telemetry/elastic-operator-864f7dd768-szq44" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.582255 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b54bc0d4-cf91-46af-bc4b-ea963cbd59bf-apiservice-cert\") pod \"elastic-operator-864f7dd768-szq44\" (UID: \"b54bc0d4-cf91-46af-bc4b-ea963cbd59bf\") " pod="service-telemetry/elastic-operator-864f7dd768-szq44" Jan 22 11:52:47 crc kubenswrapper[4874]: I0122 11:52:47.628829 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-864f7dd768-szq44" Jan 22 11:52:48 crc kubenswrapper[4874]: I0122 11:52:48.152229 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-864f7dd768-szq44"] Jan 22 11:52:48 crc kubenswrapper[4874]: I0122 11:52:48.484124 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-864f7dd768-szq44" event={"ID":"b54bc0d4-cf91-46af-bc4b-ea963cbd59bf","Type":"ContainerStarted","Data":"2a3fc82ab4d3e0219b64906ab76f1bf5d73445b7dc86b19fab025743848cb272"} Jan 22 11:52:48 crc kubenswrapper[4874]: I0122 11:52:48.986180 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" Jan 22 11:52:49 crc kubenswrapper[4874]: I0122 11:52:49.100789 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6tsr\" (UniqueName: \"kubernetes.io/projected/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-kube-api-access-t6tsr\") pod \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\" (UID: \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\") " Jan 22 11:52:49 crc kubenswrapper[4874]: I0122 11:52:49.101114 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-util\") pod \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\" (UID: \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\") " Jan 22 11:52:49 crc kubenswrapper[4874]: I0122 11:52:49.101145 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-bundle\") pod \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\" (UID: \"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1\") " Jan 22 11:52:49 crc kubenswrapper[4874]: I0122 11:52:49.102289 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-bundle" (OuterVolumeSpecName: "bundle") pod "9a48ea4e-8d83-4484-8cab-e9a38e86a2e1" (UID: "9a48ea4e-8d83-4484-8cab-e9a38e86a2e1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:52:49 crc kubenswrapper[4874]: I0122 11:52:49.106216 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-kube-api-access-t6tsr" (OuterVolumeSpecName: "kube-api-access-t6tsr") pod "9a48ea4e-8d83-4484-8cab-e9a38e86a2e1" (UID: "9a48ea4e-8d83-4484-8cab-e9a38e86a2e1"). InnerVolumeSpecName "kube-api-access-t6tsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:52:49 crc kubenswrapper[4874]: I0122 11:52:49.112202 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-util" (OuterVolumeSpecName: "util") pod "9a48ea4e-8d83-4484-8cab-e9a38e86a2e1" (UID: "9a48ea4e-8d83-4484-8cab-e9a38e86a2e1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:52:49 crc kubenswrapper[4874]: I0122 11:52:49.202588 4874 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-util\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:49 crc kubenswrapper[4874]: I0122 11:52:49.202620 4874 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:49 crc kubenswrapper[4874]: I0122 11:52:49.202629 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6tsr\" (UniqueName: \"kubernetes.io/projected/9a48ea4e-8d83-4484-8cab-e9a38e86a2e1-kube-api-access-t6tsr\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:49 crc kubenswrapper[4874]: I0122 11:52:49.492245 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" event={"ID":"9a48ea4e-8d83-4484-8cab-e9a38e86a2e1","Type":"ContainerDied","Data":"3052c644abdb9c0b9e56a9a8e5229cfd6a7be0945646a4a78aa60b838b8c13d8"} Jan 22 11:52:49 crc kubenswrapper[4874]: I0122 11:52:49.492278 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3052c644abdb9c0b9e56a9a8e5229cfd6a7be0945646a4a78aa60b838b8c13d8" Jan 22 11:52:49 crc kubenswrapper[4874]: I0122 11:52:49.492331 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5" Jan 22 11:52:50 crc kubenswrapper[4874]: I0122 11:52:50.062673 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-p9brk"] Jan 22 11:52:50 crc kubenswrapper[4874]: E0122 11:52:50.062878 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a48ea4e-8d83-4484-8cab-e9a38e86a2e1" containerName="extract" Jan 22 11:52:50 crc kubenswrapper[4874]: I0122 11:52:50.062890 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a48ea4e-8d83-4484-8cab-e9a38e86a2e1" containerName="extract" Jan 22 11:52:50 crc kubenswrapper[4874]: E0122 11:52:50.062907 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a48ea4e-8d83-4484-8cab-e9a38e86a2e1" containerName="util" Jan 22 11:52:50 crc kubenswrapper[4874]: I0122 11:52:50.062913 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a48ea4e-8d83-4484-8cab-e9a38e86a2e1" containerName="util" Jan 22 11:52:50 crc kubenswrapper[4874]: E0122 11:52:50.062921 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a48ea4e-8d83-4484-8cab-e9a38e86a2e1" containerName="pull" Jan 22 11:52:50 crc kubenswrapper[4874]: I0122 11:52:50.062928 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a48ea4e-8d83-4484-8cab-e9a38e86a2e1" containerName="pull" Jan 22 11:52:50 crc kubenswrapper[4874]: I0122 11:52:50.063018 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a48ea4e-8d83-4484-8cab-e9a38e86a2e1" containerName="extract" Jan 22 11:52:50 crc kubenswrapper[4874]: I0122 11:52:50.063363 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-p9brk" Jan 22 11:52:50 crc kubenswrapper[4874]: I0122 11:52:50.067536 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-kgzql" Jan 22 11:52:50 crc kubenswrapper[4874]: I0122 11:52:50.083431 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-p9brk"] Jan 22 11:52:50 crc kubenswrapper[4874]: I0122 11:52:50.129103 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhpmr\" (UniqueName: \"kubernetes.io/projected/cd733750-1915-4813-9e44-ec3777ce9c53-kube-api-access-vhpmr\") pod \"interconnect-operator-5bb49f789d-p9brk\" (UID: \"cd733750-1915-4813-9e44-ec3777ce9c53\") " pod="service-telemetry/interconnect-operator-5bb49f789d-p9brk" Jan 22 11:52:50 crc kubenswrapper[4874]: I0122 11:52:50.230222 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhpmr\" (UniqueName: \"kubernetes.io/projected/cd733750-1915-4813-9e44-ec3777ce9c53-kube-api-access-vhpmr\") pod \"interconnect-operator-5bb49f789d-p9brk\" (UID: \"cd733750-1915-4813-9e44-ec3777ce9c53\") " pod="service-telemetry/interconnect-operator-5bb49f789d-p9brk" Jan 22 11:52:50 crc kubenswrapper[4874]: I0122 11:52:50.265606 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhpmr\" (UniqueName: \"kubernetes.io/projected/cd733750-1915-4813-9e44-ec3777ce9c53-kube-api-access-vhpmr\") pod \"interconnect-operator-5bb49f789d-p9brk\" (UID: \"cd733750-1915-4813-9e44-ec3777ce9c53\") " pod="service-telemetry/interconnect-operator-5bb49f789d-p9brk" Jan 22 11:52:50 crc kubenswrapper[4874]: I0122 11:52:50.374364 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-p9brk" Jan 22 11:52:55 crc kubenswrapper[4874]: I0122 11:52:55.502983 4874 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 11:52:59 crc kubenswrapper[4874]: W0122 11:52:59.095784 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd733750_1915_4813_9e44_ec3777ce9c53.slice/crio-1d443ee2c729b02d94ef1b22ae25a71170a0d4e111425d5991a0bbde175c0210 WatchSource:0}: Error finding container 1d443ee2c729b02d94ef1b22ae25a71170a0d4e111425d5991a0bbde175c0210: Status 404 returned error can't find the container with id 1d443ee2c729b02d94ef1b22ae25a71170a0d4e111425d5991a0bbde175c0210 Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.095826 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-p9brk"] Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.574313 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-kqrkd" event={"ID":"97207973-f6a4-4068-b777-c964c12092fd","Type":"ContainerStarted","Data":"2d721160c38f80fe5e62eb37a3c92e1d8f8a21b4cd2a62f552f221e4ea494537"} Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.574588 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-kqrkd" Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.576272 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7v89" event={"ID":"71a74c6d-7ff3-4a3a-8c9d-bfa212b381b2","Type":"ContainerStarted","Data":"97749ab37a11c6d60882b8081be341755317bf7f62aeec71115ed701a862b8d5"} Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.577752 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8" event={"ID":"60c00b52-d6b8-440e-9e60-966b44a87577","Type":"ContainerStarted","Data":"ace16ec90ba20ee4aa51e800b37f8468cb0eda193e34cfc51a6f3ab39f0f8ea3"} Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.579233 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-p9brk" event={"ID":"cd733750-1915-4813-9e44-ec3777ce9c53","Type":"ContainerStarted","Data":"1d443ee2c729b02d94ef1b22ae25a71170a0d4e111425d5991a0bbde175c0210"} Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.580801 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-864f7dd768-szq44" event={"ID":"b54bc0d4-cf91-46af-bc4b-ea963cbd59bf","Type":"ContainerStarted","Data":"8b257cf2404e4806872fd315593bafd9a67855ffb0ab16f030c72d3a7298a7cb"} Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.582855 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-jblsg" event={"ID":"edd7744e-1336-486a-90de-6568bae7f788","Type":"ContainerStarted","Data":"82c9ab721e7f28c9a44ad0007f0e88d66266eda8f1cc1ee9d707685d33737fbd"} Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.583048 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-jblsg" Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.584469 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr" event={"ID":"7a18bf90-2af7-4296-9416-1368e89a8a03","Type":"ContainerStarted","Data":"3081848caeb9b63e9b89af3d3c8d8dfb432a0f37c0ba57d07f79e3b89fd37bd6"} Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.586153 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-jblsg" Jan 22 
11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.599447 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-kqrkd" podStartSLOduration=4.000512991 podStartE2EDuration="19.59943077s" podCreationTimestamp="2026-01-22 11:52:40 +0000 UTC" firstStartedPulling="2026-01-22 11:52:42.979897693 +0000 UTC m=+736.824968763" lastFinishedPulling="2026-01-22 11:52:58.578815472 +0000 UTC m=+752.423886542" observedRunningTime="2026-01-22 11:52:59.59652729 +0000 UTC m=+753.441598370" watchObservedRunningTime="2026-01-22 11:52:59.59943077 +0000 UTC m=+753.444501850" Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.623555 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr" podStartSLOduration=3.91816611 podStartE2EDuration="19.623533495s" podCreationTimestamp="2026-01-22 11:52:40 +0000 UTC" firstStartedPulling="2026-01-22 11:52:42.805558241 +0000 UTC m=+736.650629321" lastFinishedPulling="2026-01-22 11:52:58.510925636 +0000 UTC m=+752.355996706" observedRunningTime="2026-01-22 11:52:59.620714418 +0000 UTC m=+753.465785498" watchObservedRunningTime="2026-01-22 11:52:59.623533495 +0000 UTC m=+753.468604575" Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.671175 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-jblsg" podStartSLOduration=4.060187264 podStartE2EDuration="19.671156495s" podCreationTimestamp="2026-01-22 11:52:40 +0000 UTC" firstStartedPulling="2026-01-22 11:52:43.067570859 +0000 UTC m=+736.912641929" lastFinishedPulling="2026-01-22 11:52:58.67854009 +0000 UTC m=+752.523611160" observedRunningTime="2026-01-22 11:52:59.669894975 +0000 UTC m=+753.514966045" watchObservedRunningTime="2026-01-22 11:52:59.671156495 +0000 UTC m=+753.516227565" Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.676317 4874 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w7v89" podStartSLOduration=4.021122368 podStartE2EDuration="19.676302764s" podCreationTimestamp="2026-01-22 11:52:40 +0000 UTC" firstStartedPulling="2026-01-22 11:52:42.922614854 +0000 UTC m=+736.767685924" lastFinishedPulling="2026-01-22 11:52:58.57779526 +0000 UTC m=+752.422866320" observedRunningTime="2026-01-22 11:52:59.644593694 +0000 UTC m=+753.489664784" watchObservedRunningTime="2026-01-22 11:52:59.676302764 +0000 UTC m=+753.521373834" Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.705046 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-864f7dd768-szq44" podStartSLOduration=2.283377763 podStartE2EDuration="12.70502919s" podCreationTimestamp="2026-01-22 11:52:47 +0000 UTC" firstStartedPulling="2026-01-22 11:52:48.188360658 +0000 UTC m=+742.033431728" lastFinishedPulling="2026-01-22 11:52:58.610012085 +0000 UTC m=+752.455083155" observedRunningTime="2026-01-22 11:52:59.700744888 +0000 UTC m=+753.545815958" watchObservedRunningTime="2026-01-22 11:52:59.70502919 +0000 UTC m=+753.550100260" Jan 22 11:52:59 crc kubenswrapper[4874]: I0122 11:52:59.732087 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8" podStartSLOduration=4.175091831 podStartE2EDuration="19.732068505s" podCreationTimestamp="2026-01-22 11:52:40 +0000 UTC" firstStartedPulling="2026-01-22 11:52:43.023246871 +0000 UTC m=+736.868317941" lastFinishedPulling="2026-01-22 11:52:58.580223545 +0000 UTC m=+752.425294615" observedRunningTime="2026-01-22 11:52:59.728332419 +0000 UTC m=+753.573403489" watchObservedRunningTime="2026-01-22 11:52:59.732068505 +0000 UTC m=+753.577139575" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.479742 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["service-telemetry/elasticsearch-es-default-0"] Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.481110 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.483664 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.483785 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.483898 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.484923 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.485738 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.488659 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-5k545" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.489208 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.489260 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.491173 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.523268 4874 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.617563 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.617627 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.617706 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.617731 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.617777 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.617815 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.617886 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.617911 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.617931 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: 
\"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.617949 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.618048 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.618089 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.618114 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc 
kubenswrapper[4874]: I0122 11:53:02.618133 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.618148 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.718853 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.718914 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.718937 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: 
\"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.718972 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.719003 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.719030 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.719045 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: 
I0122 11:53:02.719061 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.719079 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.719098 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.719115 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.719130 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.719145 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.719169 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.719196 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.720303 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.720443 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.720530 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.720668 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.720755 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.720903 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.721692 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.721799 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.724405 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.724624 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.724662 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.725121 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.725263 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.738059 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.738189 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c9fa7f48-985a-46b3-a7dc-1b39dfc14243-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c9fa7f48-985a-46b3-a7dc-1b39dfc14243\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:02 crc kubenswrapper[4874]: I0122 11:53:02.798587 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:03 crc kubenswrapper[4874]: I0122 11:53:03.232462 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 22 11:53:05 crc kubenswrapper[4874]: I0122 11:53:05.327730 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb"] Jan 22 11:53:05 crc kubenswrapper[4874]: I0122 11:53:05.328530 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb" Jan 22 11:53:05 crc kubenswrapper[4874]: I0122 11:53:05.331474 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 22 11:53:05 crc kubenswrapper[4874]: I0122 11:53:05.331615 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 22 11:53:05 crc kubenswrapper[4874]: I0122 11:53:05.333128 4874 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-v456v" Jan 22 11:53:05 crc kubenswrapper[4874]: I0122 11:53:05.342440 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb"] Jan 22 11:53:05 crc kubenswrapper[4874]: I0122 11:53:05.458836 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql4qt\" (UniqueName: \"kubernetes.io/projected/e37c03ae-9bcf-4538-9aba-072106273e29-kube-api-access-ql4qt\") pod \"cert-manager-operator-controller-manager-5446d6888b-zhfmb\" (UID: \"e37c03ae-9bcf-4538-9aba-072106273e29\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb" Jan 22 11:53:05 crc kubenswrapper[4874]: I0122 
11:53:05.459248 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e37c03ae-9bcf-4538-9aba-072106273e29-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-zhfmb\" (UID: \"e37c03ae-9bcf-4538-9aba-072106273e29\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb" Jan 22 11:53:05 crc kubenswrapper[4874]: I0122 11:53:05.560189 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e37c03ae-9bcf-4538-9aba-072106273e29-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-zhfmb\" (UID: \"e37c03ae-9bcf-4538-9aba-072106273e29\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb" Jan 22 11:53:05 crc kubenswrapper[4874]: I0122 11:53:05.560285 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql4qt\" (UniqueName: \"kubernetes.io/projected/e37c03ae-9bcf-4538-9aba-072106273e29-kube-api-access-ql4qt\") pod \"cert-manager-operator-controller-manager-5446d6888b-zhfmb\" (UID: \"e37c03ae-9bcf-4538-9aba-072106273e29\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb" Jan 22 11:53:05 crc kubenswrapper[4874]: I0122 11:53:05.561007 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e37c03ae-9bcf-4538-9aba-072106273e29-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-zhfmb\" (UID: \"e37c03ae-9bcf-4538-9aba-072106273e29\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb" Jan 22 11:53:05 crc kubenswrapper[4874]: I0122 11:53:05.581581 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql4qt\" (UniqueName: 
\"kubernetes.io/projected/e37c03ae-9bcf-4538-9aba-072106273e29-kube-api-access-ql4qt\") pod \"cert-manager-operator-controller-manager-5446d6888b-zhfmb\" (UID: \"e37c03ae-9bcf-4538-9aba-072106273e29\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb" Jan 22 11:53:05 crc kubenswrapper[4874]: I0122 11:53:05.647777 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb" Jan 22 11:53:07 crc kubenswrapper[4874]: I0122 11:53:07.637105 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c9fa7f48-985a-46b3-a7dc-1b39dfc14243","Type":"ContainerStarted","Data":"bb0384e77d466936e0590d0880f5794b180c8471d296ba55e648b0f5225fc29b"} Jan 22 11:53:07 crc kubenswrapper[4874]: I0122 11:53:07.831404 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb"] Jan 22 11:53:08 crc kubenswrapper[4874]: I0122 11:53:08.642479 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb" event={"ID":"e37c03ae-9bcf-4538-9aba-072106273e29","Type":"ContainerStarted","Data":"bc20e22938974036e7e9799cf700a5fa041b9cd37f4b42e9cbc623867237c28f"} Jan 22 11:53:08 crc kubenswrapper[4874]: I0122 11:53:08.643917 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-p9brk" event={"ID":"cd733750-1915-4813-9e44-ec3777ce9c53","Type":"ContainerStarted","Data":"6d63acec90a7b207296bf715fe4ea871ed1bc2eac339709bf80adb3104b5e582"} Jan 22 11:53:08 crc kubenswrapper[4874]: I0122 11:53:08.662326 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-p9brk" podStartSLOduration=9.986926834 podStartE2EDuration="18.662301749s" 
podCreationTimestamp="2026-01-22 11:52:50 +0000 UTC" firstStartedPulling="2026-01-22 11:52:59.099245859 +0000 UTC m=+752.944316929" lastFinishedPulling="2026-01-22 11:53:07.774620774 +0000 UTC m=+761.619691844" observedRunningTime="2026-01-22 11:53:08.659449091 +0000 UTC m=+762.504520161" watchObservedRunningTime="2026-01-22 11:53:08.662301749 +0000 UTC m=+762.507372819" Jan 22 11:53:11 crc kubenswrapper[4874]: I0122 11:53:11.043759 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-kqrkd" Jan 22 11:53:13 crc kubenswrapper[4874]: I0122 11:53:13.520057 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:53:13 crc kubenswrapper[4874]: I0122 11:53:13.520114 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:53:13 crc kubenswrapper[4874]: I0122 11:53:13.520162 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 11:53:13 crc kubenswrapper[4874]: I0122 11:53:13.520765 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58a69e8f9170bdd4dd90e6e773cd03089d2a6279398d2b1a2ba4ed87135be13a"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:53:13 crc kubenswrapper[4874]: 
I0122 11:53:13.520813 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://58a69e8f9170bdd4dd90e6e773cd03089d2a6279398d2b1a2ba4ed87135be13a" gracePeriod=600 Jan 22 11:53:14 crc kubenswrapper[4874]: I0122 11:53:14.684475 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="58a69e8f9170bdd4dd90e6e773cd03089d2a6279398d2b1a2ba4ed87135be13a" exitCode=0 Jan 22 11:53:14 crc kubenswrapper[4874]: I0122 11:53:14.684522 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"58a69e8f9170bdd4dd90e6e773cd03089d2a6279398d2b1a2ba4ed87135be13a"} Jan 22 11:53:14 crc kubenswrapper[4874]: I0122 11:53:14.684568 4874 scope.go:117] "RemoveContainer" containerID="4ef8bf2a0eb4af528f4de7cda955aa92b0de433bdf6d0ebda36103e1834fe2b2" Jan 22 11:53:21 crc kubenswrapper[4874]: E0122 11:53:21.004425 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911" Jan 22 11:53:21 crc kubenswrapper[4874]: E0122 11:53:21.005041 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-operator,Image:registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911,Command:[/usr/bin/cert-manager-operator],Args:[start --v=$(OPERATOR_LOG_LEVEL) --trusted-ca-configmap=$(TRUSTED_CA_CONFIGMAP_NAME) --cloud-credentials-secret=$(CLOUD_CREDENTIALS_SECRET_NAME) 
--unsupported-addon-features=$(UNSUPPORTED_ADDON_FEATURES)],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cert-manager-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_WEBHOOK,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CA_INJECTOR,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CONTROLLER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ACMESOLVER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-acmesolver-rhel9@sha256:ba937fc4b9eee31422914352c11a45b90754ba4fbe490ea45249b90afdc4e0a7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ISTIOCSR,Value:registry.redhat.io/cert-manager/cert-manager-istio-csr-rhel9@sha256:af1ac813b8ee414ef215936f05197bc498bccbd540f3e2a93cb522221ba112bc,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.18.3,ValueFrom:nil,},EnvVar{Name:ISTIOCSR_OPERAND_IMAGE_VERSION,Value:0.14.2,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:1.18.0,ValueFrom:nil,},EnvVar{Name:OPERATOR_LOG_LEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:TRUSTED_CA_CONFIGMAP_NAME,Value:,ValueFrom:
nil,},EnvVar{Name:CLOUD_CREDENTIALS_SECRET_NAME,Value:,ValueFrom:nil,},EnvVar{Name:UNSUPPORTED_ADDON_FEATURES,Value:,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cert-manager-operator.v1.18.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{33554432 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ql4qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000680000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-operator-controller-manager-5446d6888b-zhfmb_cert-manager-operator(e37c03ae-9bcf-4538-9aba-072106273e29): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 11:53:21 crc kubenswrapper[4874]: E0122 11:53:21.006992 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb" podUID="e37c03ae-9bcf-4538-9aba-072106273e29" Jan 22 11:53:21 crc kubenswrapper[4874]: E0122 11:53:21.723114 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911\\\"\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb" podUID="e37c03ae-9bcf-4538-9aba-072106273e29" Jan 22 11:53:25 crc kubenswrapper[4874]: I0122 11:53:25.746447 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"2554a3567b106d7ded370e89a12bf13dafb3f02d930bcb0aa478a1f4bf2cf32b"} Jan 22 11:53:25 crc kubenswrapper[4874]: I0122 11:53:25.748486 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c9fa7f48-985a-46b3-a7dc-1b39dfc14243","Type":"ContainerStarted","Data":"76f1f52f33f85f0ce7add7dbcada81bb14335fa3c949ab0293c2a94b1081484e"} Jan 22 11:53:26 crc kubenswrapper[4874]: I0122 11:53:26.123163 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 22 11:53:26 crc kubenswrapper[4874]: I0122 11:53:26.171294 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 22 11:53:27 crc kubenswrapper[4874]: I0122 11:53:27.761348 4874 generic.go:334] "Generic (PLEG): container finished" podID="c9fa7f48-985a-46b3-a7dc-1b39dfc14243" containerID="76f1f52f33f85f0ce7add7dbcada81bb14335fa3c949ab0293c2a94b1081484e" exitCode=0 Jan 22 11:53:27 crc kubenswrapper[4874]: I0122 11:53:27.761452 4874 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c9fa7f48-985a-46b3-a7dc-1b39dfc14243","Type":"ContainerDied","Data":"76f1f52f33f85f0ce7add7dbcada81bb14335fa3c949ab0293c2a94b1081484e"} Jan 22 11:53:28 crc kubenswrapper[4874]: I0122 11:53:28.770265 4874 generic.go:334] "Generic (PLEG): container finished" podID="c9fa7f48-985a-46b3-a7dc-1b39dfc14243" containerID="758db578e91bb6515d0e82c635722c1453210d78166d82b004ff9c8910119fbb" exitCode=0 Jan 22 11:53:28 crc kubenswrapper[4874]: I0122 11:53:28.770308 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c9fa7f48-985a-46b3-a7dc-1b39dfc14243","Type":"ContainerDied","Data":"758db578e91bb6515d0e82c635722c1453210d78166d82b004ff9c8910119fbb"} Jan 22 11:53:29 crc kubenswrapper[4874]: I0122 11:53:29.779518 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c9fa7f48-985a-46b3-a7dc-1b39dfc14243","Type":"ContainerStarted","Data":"b4903bb13070f48c4d782896c6a57a6a3155d42b060fa318be80e908fb0f39c3"} Jan 22 11:53:29 crc kubenswrapper[4874]: I0122 11:53:29.780095 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:29 crc kubenswrapper[4874]: I0122 11:53:29.843827 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=10.705030057 podStartE2EDuration="27.843810046s" podCreationTimestamp="2026-01-22 11:53:02 +0000 UTC" firstStartedPulling="2026-01-22 11:53:07.607980639 +0000 UTC m=+761.453051709" lastFinishedPulling="2026-01-22 11:53:24.746760628 +0000 UTC m=+778.591831698" observedRunningTime="2026-01-22 11:53:29.842827235 +0000 UTC m=+783.687898345" watchObservedRunningTime="2026-01-22 11:53:29.843810046 +0000 UTC m=+783.688881116" Jan 22 11:53:35 crc kubenswrapper[4874]: I0122 11:53:35.825009 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb" event={"ID":"e37c03ae-9bcf-4538-9aba-072106273e29","Type":"ContainerStarted","Data":"f0d0c6331a5fb1c370792881f12b84b77bda275fa555f2d0fc6a5cc4fe381538"} Jan 22 11:53:35 crc kubenswrapper[4874]: I0122 11:53:35.853173 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-zhfmb" podStartSLOduration=3.243297462 podStartE2EDuration="30.853153351s" podCreationTimestamp="2026-01-22 11:53:05 +0000 UTC" firstStartedPulling="2026-01-22 11:53:07.83798973 +0000 UTC m=+761.683060800" lastFinishedPulling="2026-01-22 11:53:35.447845609 +0000 UTC m=+789.292916689" observedRunningTime="2026-01-22 11:53:35.852467271 +0000 UTC m=+789.697538351" watchObservedRunningTime="2026-01-22 11:53:35.853153351 +0000 UTC m=+789.698224421" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.863309 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.881518 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.881684 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.884111 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.884281 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.884614 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jmkbp" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.884643 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.918586 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.918641 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.918671 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: 
\"kubernetes.io/secret/1b09a4fd-742d-4043-9068-e0c2cfb31f33-builder-dockercfg-jmkbp-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.918868 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/1b09a4fd-742d-4043-9068-e0c2cfb31f33-builder-dockercfg-jmkbp-push\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.918986 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.919055 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.919088 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1b09a4fd-742d-4043-9068-e0c2cfb31f33-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.919125 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.919151 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.919170 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1b09a4fd-742d-4043-9068-e0c2cfb31f33-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.919204 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcbzf\" (UniqueName: \"kubernetes.io/projected/1b09a4fd-742d-4043-9068-e0c2cfb31f33-kube-api-access-rcbzf\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:38 crc kubenswrapper[4874]: I0122 11:53:38.919247 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.020556 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/1b09a4fd-742d-4043-9068-e0c2cfb31f33-builder-dockercfg-jmkbp-push\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.020625 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.020667 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.020695 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1b09a4fd-742d-4043-9068-e0c2cfb31f33-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.020736 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.020761 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1b09a4fd-742d-4043-9068-e0c2cfb31f33-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.020784 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.020811 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcbzf\" (UniqueName: \"kubernetes.io/projected/1b09a4fd-742d-4043-9068-e0c2cfb31f33-kube-api-access-rcbzf\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.020840 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.020869 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.020905 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.020938 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/1b09a4fd-742d-4043-9068-e0c2cfb31f33-builder-dockercfg-jmkbp-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.022032 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1b09a4fd-742d-4043-9068-e0c2cfb31f33-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.022533 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-buildworkdir\") pod 
\"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.022820 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.022990 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.023017 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1b09a4fd-742d-4043-9068-e0c2cfb31f33-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.023157 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.028228 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.028881 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.029108 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/1b09a4fd-742d-4043-9068-e0c2cfb31f33-builder-dockercfg-jmkbp-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.029800 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.033226 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/1b09a4fd-742d-4043-9068-e0c2cfb31f33-builder-dockercfg-jmkbp-push\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.049292 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcbzf\" (UniqueName: \"kubernetes.io/projected/1b09a4fd-742d-4043-9068-e0c2cfb31f33-kube-api-access-rcbzf\") pod \"service-telemetry-operator-1-build\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.200994 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.670601 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 11:53:39 crc kubenswrapper[4874]: I0122 11:53:39.854736 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"1b09a4fd-742d-4043-9068-e0c2cfb31f33","Type":"ContainerStarted","Data":"10c98d3b8b81f049d9113119ec9418b2eec58e5f22be050a8e70df9f0b424d91"} Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.315796 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-cvlzs"] Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.316503 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvlzs" Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.318728 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.318861 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.321629 4874 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2cbsm" Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.350510 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-cvlzs"] Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.444550 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bef508e9-b648-4c62-bc0d-91bf604067ed-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-cvlzs\" (UID: \"bef508e9-b648-4c62-bc0d-91bf604067ed\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cvlzs" Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.444595 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxkpx\" (UniqueName: \"kubernetes.io/projected/bef508e9-b648-4c62-bc0d-91bf604067ed-kube-api-access-lxkpx\") pod \"cert-manager-webhook-f4fb5df64-cvlzs\" (UID: \"bef508e9-b648-4c62-bc0d-91bf604067ed\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cvlzs" Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.545962 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bef508e9-b648-4c62-bc0d-91bf604067ed-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-cvlzs\" (UID: \"bef508e9-b648-4c62-bc0d-91bf604067ed\") 
" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvlzs" Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.546009 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxkpx\" (UniqueName: \"kubernetes.io/projected/bef508e9-b648-4c62-bc0d-91bf604067ed-kube-api-access-lxkpx\") pod \"cert-manager-webhook-f4fb5df64-cvlzs\" (UID: \"bef508e9-b648-4c62-bc0d-91bf604067ed\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cvlzs" Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.575384 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxkpx\" (UniqueName: \"kubernetes.io/projected/bef508e9-b648-4c62-bc0d-91bf604067ed-kube-api-access-lxkpx\") pod \"cert-manager-webhook-f4fb5df64-cvlzs\" (UID: \"bef508e9-b648-4c62-bc0d-91bf604067ed\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cvlzs" Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.588473 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bef508e9-b648-4c62-bc0d-91bf604067ed-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-cvlzs\" (UID: \"bef508e9-b648-4c62-bc0d-91bf604067ed\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cvlzs" Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.633275 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvlzs" Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.818300 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-cvlzs"] Jan 22 11:53:40 crc kubenswrapper[4874]: W0122 11:53:40.823886 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbef508e9_b648_4c62_bc0d_91bf604067ed.slice/crio-cc71356a99702a6207e9fd2dc646169e87ad072ef05c8409f7680ebd1f6455e4 WatchSource:0}: Error finding container cc71356a99702a6207e9fd2dc646169e87ad072ef05c8409f7680ebd1f6455e4: Status 404 returned error can't find the container with id cc71356a99702a6207e9fd2dc646169e87ad072ef05c8409f7680ebd1f6455e4 Jan 22 11:53:40 crc kubenswrapper[4874]: I0122 11:53:40.861195 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvlzs" event={"ID":"bef508e9-b648-4c62-bc0d-91bf604067ed","Type":"ContainerStarted","Data":"cc71356a99702a6207e9fd2dc646169e87ad072ef05c8409f7680ebd1f6455e4"} Jan 22 11:53:43 crc kubenswrapper[4874]: I0122 11:53:43.123297 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m"] Jan 22 11:53:43 crc kubenswrapper[4874]: I0122 11:53:43.146358 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m" Jan 22 11:53:43 crc kubenswrapper[4874]: I0122 11:53:43.148883 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m"] Jan 22 11:53:43 crc kubenswrapper[4874]: I0122 11:53:43.151676 4874 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-b5dq9" Jan 22 11:53:43 crc kubenswrapper[4874]: I0122 11:53:43.182808 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfmh7\" (UniqueName: \"kubernetes.io/projected/b35625a1-4b85-4915-8ac0-97ff11979513-kube-api-access-mfmh7\") pod \"cert-manager-cainjector-855d9ccff4-mnt2m\" (UID: \"b35625a1-4b85-4915-8ac0-97ff11979513\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m" Jan 22 11:53:43 crc kubenswrapper[4874]: I0122 11:53:43.182883 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b35625a1-4b85-4915-8ac0-97ff11979513-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-mnt2m\" (UID: \"b35625a1-4b85-4915-8ac0-97ff11979513\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m" Jan 22 11:53:43 crc kubenswrapper[4874]: I0122 11:53:43.283657 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b35625a1-4b85-4915-8ac0-97ff11979513-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-mnt2m\" (UID: \"b35625a1-4b85-4915-8ac0-97ff11979513\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m" Jan 22 11:53:43 crc kubenswrapper[4874]: I0122 11:53:43.283820 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfmh7\" (UniqueName: 
\"kubernetes.io/projected/b35625a1-4b85-4915-8ac0-97ff11979513-kube-api-access-mfmh7\") pod \"cert-manager-cainjector-855d9ccff4-mnt2m\" (UID: \"b35625a1-4b85-4915-8ac0-97ff11979513\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m" Jan 22 11:53:43 crc kubenswrapper[4874]: I0122 11:53:43.316603 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b35625a1-4b85-4915-8ac0-97ff11979513-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-mnt2m\" (UID: \"b35625a1-4b85-4915-8ac0-97ff11979513\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m" Jan 22 11:53:43 crc kubenswrapper[4874]: I0122 11:53:43.316747 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfmh7\" (UniqueName: \"kubernetes.io/projected/b35625a1-4b85-4915-8ac0-97ff11979513-kube-api-access-mfmh7\") pod \"cert-manager-cainjector-855d9ccff4-mnt2m\" (UID: \"b35625a1-4b85-4915-8ac0-97ff11979513\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m" Jan 22 11:53:43 crc kubenswrapper[4874]: I0122 11:53:43.472104 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m" Jan 22 11:53:43 crc kubenswrapper[4874]: I0122 11:53:43.503068 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 22 11:53:49 crc kubenswrapper[4874]: I0122 11:53:49.290726 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 11:53:50 crc kubenswrapper[4874]: I0122 11:53:50.666793 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-7nhkf"] Jan 22 11:53:50 crc kubenswrapper[4874]: I0122 11:53:50.667765 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-7nhkf" Jan 22 11:53:50 crc kubenswrapper[4874]: I0122 11:53:50.673706 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-7nhkf"] Jan 22 11:53:50 crc kubenswrapper[4874]: I0122 11:53:50.674296 4874 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-496mh" Jan 22 11:53:50 crc kubenswrapper[4874]: I0122 11:53:50.700001 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bl67\" (UniqueName: \"kubernetes.io/projected/50438a7c-3a86-4aeb-a9c6-10ccb68f4593-kube-api-access-7bl67\") pod \"cert-manager-86cb77c54b-7nhkf\" (UID: \"50438a7c-3a86-4aeb-a9c6-10ccb68f4593\") " pod="cert-manager/cert-manager-86cb77c54b-7nhkf" Jan 22 11:53:50 crc kubenswrapper[4874]: I0122 11:53:50.700067 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50438a7c-3a86-4aeb-a9c6-10ccb68f4593-bound-sa-token\") pod \"cert-manager-86cb77c54b-7nhkf\" (UID: \"50438a7c-3a86-4aeb-a9c6-10ccb68f4593\") " pod="cert-manager/cert-manager-86cb77c54b-7nhkf" Jan 22 11:53:50 crc kubenswrapper[4874]: I0122 11:53:50.802188 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bl67\" (UniqueName: \"kubernetes.io/projected/50438a7c-3a86-4aeb-a9c6-10ccb68f4593-kube-api-access-7bl67\") pod \"cert-manager-86cb77c54b-7nhkf\" (UID: \"50438a7c-3a86-4aeb-a9c6-10ccb68f4593\") " pod="cert-manager/cert-manager-86cb77c54b-7nhkf" Jan 22 11:53:50 crc kubenswrapper[4874]: I0122 11:53:50.802303 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50438a7c-3a86-4aeb-a9c6-10ccb68f4593-bound-sa-token\") pod \"cert-manager-86cb77c54b-7nhkf\" (UID: 
\"50438a7c-3a86-4aeb-a9c6-10ccb68f4593\") " pod="cert-manager/cert-manager-86cb77c54b-7nhkf" Jan 22 11:53:50 crc kubenswrapper[4874]: I0122 11:53:50.833366 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50438a7c-3a86-4aeb-a9c6-10ccb68f4593-bound-sa-token\") pod \"cert-manager-86cb77c54b-7nhkf\" (UID: \"50438a7c-3a86-4aeb-a9c6-10ccb68f4593\") " pod="cert-manager/cert-manager-86cb77c54b-7nhkf" Jan 22 11:53:50 crc kubenswrapper[4874]: I0122 11:53:50.834982 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bl67\" (UniqueName: \"kubernetes.io/projected/50438a7c-3a86-4aeb-a9c6-10ccb68f4593-kube-api-access-7bl67\") pod \"cert-manager-86cb77c54b-7nhkf\" (UID: \"50438a7c-3a86-4aeb-a9c6-10ccb68f4593\") " pod="cert-manager/cert-manager-86cb77c54b-7nhkf" Jan 22 11:53:50 crc kubenswrapper[4874]: I0122 11:53:50.983691 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-7nhkf" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.202775 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m"] Jan 22 11:53:51 crc kubenswrapper[4874]: W0122 11:53:51.207386 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb35625a1_4b85_4915_8ac0_97ff11979513.slice/crio-4a563be5ef7eaf5bb94f189fa58db89c1e68323cebfaad76e8bd9efd70ad74cc WatchSource:0}: Error finding container 4a563be5ef7eaf5bb94f189fa58db89c1e68323cebfaad76e8bd9efd70ad74cc: Status 404 returned error can't find the container with id 4a563be5ef7eaf5bb94f189fa58db89c1e68323cebfaad76e8bd9efd70ad74cc Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.388386 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 22 11:53:51 crc 
kubenswrapper[4874]: I0122 11:53:51.404804 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.410994 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.411195 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.411318 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.420780 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.496990 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-7nhkf"] Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.513537 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.513598 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b6347418-07a3-41af-aea9-1eddb77e64fb-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 
11:53:51.513633 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/b6347418-07a3-41af-aea9-1eddb77e64fb-builder-dockercfg-jmkbp-push\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.513661 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.513683 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.513721 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb5k6\" (UniqueName: \"kubernetes.io/projected/b6347418-07a3-41af-aea9-1eddb77e64fb-kube-api-access-kb5k6\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.513750 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.513775 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6347418-07a3-41af-aea9-1eddb77e64fb-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.513809 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/b6347418-07a3-41af-aea9-1eddb77e64fb-builder-dockercfg-jmkbp-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.513832 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.513857 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 
11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.513899 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615096 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b6347418-07a3-41af-aea9-1eddb77e64fb-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615140 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/b6347418-07a3-41af-aea9-1eddb77e64fb-builder-dockercfg-jmkbp-push\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615158 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615174 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-container-storage-root\") pod 
\"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615204 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb5k6\" (UniqueName: \"kubernetes.io/projected/b6347418-07a3-41af-aea9-1eddb77e64fb-kube-api-access-kb5k6\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615226 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615246 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6347418-07a3-41af-aea9-1eddb77e64fb-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615269 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/b6347418-07a3-41af-aea9-1eddb77e64fb-builder-dockercfg-jmkbp-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615289 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" 
(UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615306 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615337 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615367 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615695 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.615744 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6347418-07a3-41af-aea9-1eddb77e64fb-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.616013 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b6347418-07a3-41af-aea9-1eddb77e64fb-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.616350 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.616577 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.616899 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 
11:53:51.616898 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.617074 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.617179 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.620920 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/b6347418-07a3-41af-aea9-1eddb77e64fb-builder-dockercfg-jmkbp-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.620983 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/b6347418-07a3-41af-aea9-1eddb77e64fb-builder-dockercfg-jmkbp-push\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.633860 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb5k6\" (UniqueName: \"kubernetes.io/projected/b6347418-07a3-41af-aea9-1eddb77e64fb-kube-api-access-kb5k6\") pod \"service-telemetry-operator-2-build\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.731083 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.953197 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"1b09a4fd-742d-4043-9068-e0c2cfb31f33","Type":"ContainerStarted","Data":"11c232a743fd09cd39e768ebb2123d169afbc2c271af710e33ecb1dd42732dff"} Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.953315 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="1b09a4fd-742d-4043-9068-e0c2cfb31f33" containerName="manage-dockerfile" containerID="cri-o://11c232a743fd09cd39e768ebb2123d169afbc2c271af710e33ecb1dd42732dff" gracePeriod=30 Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.954978 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m" event={"ID":"b35625a1-4b85-4915-8ac0-97ff11979513","Type":"ContainerStarted","Data":"c670552933f7270daf00e6564c7c223dbc80cdba112b0bbbdbfc9bdbd5471310"} Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.955016 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m" 
event={"ID":"b35625a1-4b85-4915-8ac0-97ff11979513","Type":"ContainerStarted","Data":"4a563be5ef7eaf5bb94f189fa58db89c1e68323cebfaad76e8bd9efd70ad74cc"} Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.959115 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvlzs" event={"ID":"bef508e9-b648-4c62-bc0d-91bf604067ed","Type":"ContainerStarted","Data":"5ebbd4a950f408f1e40965d49fcc41103d96f3bbd2f95124a8b83dd5a1611bcc"} Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.959276 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvlzs" Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.960412 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-7nhkf" event={"ID":"50438a7c-3a86-4aeb-a9c6-10ccb68f4593","Type":"ContainerStarted","Data":"e1c4b6578f52fc6594e2e874b18303c7e8351947f777186a48184d682c938010"} Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.960448 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-7nhkf" event={"ID":"50438a7c-3a86-4aeb-a9c6-10ccb68f4593","Type":"ContainerStarted","Data":"963fe171f462cde36ba653026c68d7571de2986c471df1abe768561e21a30d8a"} Jan 22 11:53:51 crc kubenswrapper[4874]: I0122 11:53:51.974899 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.033715 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-7nhkf" podStartSLOduration=2.033694993 podStartE2EDuration="2.033694993s" podCreationTimestamp="2026-01-22 11:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:53:52.00868727 +0000 UTC m=+805.853758340" 
watchObservedRunningTime="2026-01-22 11:53:52.033694993 +0000 UTC m=+805.878766063" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.035284 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvlzs" podStartSLOduration=1.8041309829999999 podStartE2EDuration="12.035275622s" podCreationTimestamp="2026-01-22 11:53:40 +0000 UTC" firstStartedPulling="2026-01-22 11:53:40.826090621 +0000 UTC m=+794.671161691" lastFinishedPulling="2026-01-22 11:53:51.05723526 +0000 UTC m=+804.902306330" observedRunningTime="2026-01-22 11:53:52.029621987 +0000 UTC m=+805.874693067" watchObservedRunningTime="2026-01-22 11:53:52.035275622 +0000 UTC m=+805.880346692" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.046568 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-mnt2m" podStartSLOduration=9.04655092 podStartE2EDuration="9.04655092s" podCreationTimestamp="2026-01-22 11:53:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:53:52.044996583 +0000 UTC m=+805.890067663" watchObservedRunningTime="2026-01-22 11:53:52.04655092 +0000 UTC m=+805.891621990" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.376608 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_1b09a4fd-742d-4043-9068-e0c2cfb31f33/manage-dockerfile/0.log" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.376667 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.434124 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-buildworkdir\") pod \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.434165 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-blob-cache\") pod \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.434211 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-system-configs\") pod \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.434233 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1b09a4fd-742d-4043-9068-e0c2cfb31f33-node-pullsecrets\") pod \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.434261 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-container-storage-root\") pod \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.434282 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rcbzf\" (UniqueName: \"kubernetes.io/projected/1b09a4fd-742d-4043-9068-e0c2cfb31f33-kube-api-access-rcbzf\") pod \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.434303 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/1b09a4fd-742d-4043-9068-e0c2cfb31f33-builder-dockercfg-jmkbp-push\") pod \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.434327 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-ca-bundles\") pod \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.434377 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-proxy-ca-bundles\") pod \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.434421 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1b09a4fd-742d-4043-9068-e0c2cfb31f33-buildcachedir\") pod \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.434441 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-container-storage-run\") pod \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.434459 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/1b09a4fd-742d-4043-9068-e0c2cfb31f33-builder-dockercfg-jmkbp-pull\") pod \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\" (UID: \"1b09a4fd-742d-4043-9068-e0c2cfb31f33\") " Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.435895 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1b09a4fd-742d-4043-9068-e0c2cfb31f33" (UID: "1b09a4fd-742d-4043-9068-e0c2cfb31f33"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.435940 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b09a4fd-742d-4043-9068-e0c2cfb31f33-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1b09a4fd-742d-4043-9068-e0c2cfb31f33" (UID: "1b09a4fd-742d-4043-9068-e0c2cfb31f33"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.435950 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1b09a4fd-742d-4043-9068-e0c2cfb31f33" (UID: "1b09a4fd-742d-4043-9068-e0c2cfb31f33"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.435953 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1b09a4fd-742d-4043-9068-e0c2cfb31f33" (UID: "1b09a4fd-742d-4043-9068-e0c2cfb31f33"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.435736 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b09a4fd-742d-4043-9068-e0c2cfb31f33-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1b09a4fd-742d-4043-9068-e0c2cfb31f33" (UID: "1b09a4fd-742d-4043-9068-e0c2cfb31f33"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.436127 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1b09a4fd-742d-4043-9068-e0c2cfb31f33" (UID: "1b09a4fd-742d-4043-9068-e0c2cfb31f33"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.436115 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1b09a4fd-742d-4043-9068-e0c2cfb31f33" (UID: "1b09a4fd-742d-4043-9068-e0c2cfb31f33"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.436241 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1b09a4fd-742d-4043-9068-e0c2cfb31f33" (UID: "1b09a4fd-742d-4043-9068-e0c2cfb31f33"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.436521 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1b09a4fd-742d-4043-9068-e0c2cfb31f33" (UID: "1b09a4fd-742d-4043-9068-e0c2cfb31f33"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.440081 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b09a4fd-742d-4043-9068-e0c2cfb31f33-builder-dockercfg-jmkbp-pull" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-pull") pod "1b09a4fd-742d-4043-9068-e0c2cfb31f33" (UID: "1b09a4fd-742d-4043-9068-e0c2cfb31f33"). InnerVolumeSpecName "builder-dockercfg-jmkbp-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.440206 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b09a4fd-742d-4043-9068-e0c2cfb31f33-kube-api-access-rcbzf" (OuterVolumeSpecName: "kube-api-access-rcbzf") pod "1b09a4fd-742d-4043-9068-e0c2cfb31f33" (UID: "1b09a4fd-742d-4043-9068-e0c2cfb31f33"). InnerVolumeSpecName "kube-api-access-rcbzf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.440426 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b09a4fd-742d-4043-9068-e0c2cfb31f33-builder-dockercfg-jmkbp-push" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-push") pod "1b09a4fd-742d-4043-9068-e0c2cfb31f33" (UID: "1b09a4fd-742d-4043-9068-e0c2cfb31f33"). InnerVolumeSpecName "builder-dockercfg-jmkbp-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.535575 4874 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.535617 4874 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.535633 4874 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.535645 4874 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1b09a4fd-742d-4043-9068-e0c2cfb31f33-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.535657 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.535668 4874 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-rcbzf\" (UniqueName: \"kubernetes.io/projected/1b09a4fd-742d-4043-9068-e0c2cfb31f33-kube-api-access-rcbzf\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.535678 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/1b09a4fd-742d-4043-9068-e0c2cfb31f33-builder-dockercfg-jmkbp-push\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.535690 4874 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.535700 4874 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b09a4fd-742d-4043-9068-e0c2cfb31f33-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.535710 4874 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1b09a4fd-742d-4043-9068-e0c2cfb31f33-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.535720 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1b09a4fd-742d-4043-9068-e0c2cfb31f33-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.535730 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/1b09a4fd-742d-4043-9068-e0c2cfb31f33-builder-dockercfg-jmkbp-pull\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.966853 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"b6347418-07a3-41af-aea9-1eddb77e64fb","Type":"ContainerStarted","Data":"07d8d2d70b9416f6aeb9ec8e89a4ab652f04c9798b2aba0722577fe2f16d5b0e"} Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.966903 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"b6347418-07a3-41af-aea9-1eddb77e64fb","Type":"ContainerStarted","Data":"bcebca6560efe9e9a2543339a21c60dc4631ab73e752c93dad1062b96446647d"} Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.969646 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_1b09a4fd-742d-4043-9068-e0c2cfb31f33/manage-dockerfile/0.log" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.969699 4874 generic.go:334] "Generic (PLEG): container finished" podID="1b09a4fd-742d-4043-9068-e0c2cfb31f33" containerID="11c232a743fd09cd39e768ebb2123d169afbc2c271af710e33ecb1dd42732dff" exitCode=1 Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.969770 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.969773 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"1b09a4fd-742d-4043-9068-e0c2cfb31f33","Type":"ContainerDied","Data":"11c232a743fd09cd39e768ebb2123d169afbc2c271af710e33ecb1dd42732dff"} Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.969829 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"1b09a4fd-742d-4043-9068-e0c2cfb31f33","Type":"ContainerDied","Data":"10c98d3b8b81f049d9113119ec9418b2eec58e5f22be050a8e70df9f0b424d91"} Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.969856 4874 scope.go:117] "RemoveContainer" containerID="11c232a743fd09cd39e768ebb2123d169afbc2c271af710e33ecb1dd42732dff" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.991281 4874 scope.go:117] "RemoveContainer" containerID="11c232a743fd09cd39e768ebb2123d169afbc2c271af710e33ecb1dd42732dff" Jan 22 11:53:52 crc kubenswrapper[4874]: E0122 11:53:52.991729 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11c232a743fd09cd39e768ebb2123d169afbc2c271af710e33ecb1dd42732dff\": container with ID starting with 11c232a743fd09cd39e768ebb2123d169afbc2c271af710e33ecb1dd42732dff not found: ID does not exist" containerID="11c232a743fd09cd39e768ebb2123d169afbc2c271af710e33ecb1dd42732dff" Jan 22 11:53:52 crc kubenswrapper[4874]: I0122 11:53:52.991764 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11c232a743fd09cd39e768ebb2123d169afbc2c271af710e33ecb1dd42732dff"} err="failed to get container status \"11c232a743fd09cd39e768ebb2123d169afbc2c271af710e33ecb1dd42732dff\": rpc error: code = NotFound desc = could not find container 
\"11c232a743fd09cd39e768ebb2123d169afbc2c271af710e33ecb1dd42732dff\": container with ID starting with 11c232a743fd09cd39e768ebb2123d169afbc2c271af710e33ecb1dd42732dff not found: ID does not exist" Jan 22 11:53:53 crc kubenswrapper[4874]: I0122 11:53:53.006943 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 11:53:53 crc kubenswrapper[4874]: I0122 11:53:53.012629 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 11:53:54 crc kubenswrapper[4874]: I0122 11:53:54.727430 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b09a4fd-742d-4043-9068-e0c2cfb31f33" path="/var/lib/kubelet/pods/1b09a4fd-742d-4043-9068-e0c2cfb31f33/volumes" Jan 22 11:54:00 crc kubenswrapper[4874]: I0122 11:54:00.637075 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvlzs" Jan 22 11:54:02 crc kubenswrapper[4874]: I0122 11:54:02.028526 4874 generic.go:334] "Generic (PLEG): container finished" podID="b6347418-07a3-41af-aea9-1eddb77e64fb" containerID="07d8d2d70b9416f6aeb9ec8e89a4ab652f04c9798b2aba0722577fe2f16d5b0e" exitCode=0 Jan 22 11:54:02 crc kubenswrapper[4874]: I0122 11:54:02.028632 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"b6347418-07a3-41af-aea9-1eddb77e64fb","Type":"ContainerDied","Data":"07d8d2d70b9416f6aeb9ec8e89a4ab652f04c9798b2aba0722577fe2f16d5b0e"} Jan 22 11:54:03 crc kubenswrapper[4874]: I0122 11:54:03.038351 4874 generic.go:334] "Generic (PLEG): container finished" podID="b6347418-07a3-41af-aea9-1eddb77e64fb" containerID="24440b237869ff1bb9aafcbe4d425a6f81c7f6a2e6e5467fe6067f0986e8b06f" exitCode=0 Jan 22 11:54:03 crc kubenswrapper[4874]: I0122 11:54:03.038452 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"b6347418-07a3-41af-aea9-1eddb77e64fb","Type":"ContainerDied","Data":"24440b237869ff1bb9aafcbe4d425a6f81c7f6a2e6e5467fe6067f0986e8b06f"} Jan 22 11:54:03 crc kubenswrapper[4874]: I0122 11:54:03.085587 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_b6347418-07a3-41af-aea9-1eddb77e64fb/manage-dockerfile/0.log" Jan 22 11:54:04 crc kubenswrapper[4874]: I0122 11:54:04.047100 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"b6347418-07a3-41af-aea9-1eddb77e64fb","Type":"ContainerStarted","Data":"e7358f32b5ffa397582e9f6239f586687e7f578c71f7955b255e87062a74ca0f"} Jan 22 11:54:04 crc kubenswrapper[4874]: I0122 11:54:04.090097 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=13.09006503 podStartE2EDuration="13.09006503s" podCreationTimestamp="2026-01-22 11:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:54:04.078202763 +0000 UTC m=+817.923273893" watchObservedRunningTime="2026-01-22 11:54:04.09006503 +0000 UTC m=+817.935136180" Jan 22 11:54:59 crc kubenswrapper[4874]: I0122 11:54:59.825674 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-98926"] Jan 22 11:54:59 crc kubenswrapper[4874]: E0122 11:54:59.827413 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b09a4fd-742d-4043-9068-e0c2cfb31f33" containerName="manage-dockerfile" Jan 22 11:54:59 crc kubenswrapper[4874]: I0122 11:54:59.827489 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b09a4fd-742d-4043-9068-e0c2cfb31f33" containerName="manage-dockerfile" Jan 22 11:54:59 crc kubenswrapper[4874]: I0122 11:54:59.827661 4874 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1b09a4fd-742d-4043-9068-e0c2cfb31f33" containerName="manage-dockerfile" Jan 22 11:54:59 crc kubenswrapper[4874]: I0122 11:54:59.830454 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-98926" Jan 22 11:54:59 crc kubenswrapper[4874]: I0122 11:54:59.830918 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-98926"] Jan 22 11:54:59 crc kubenswrapper[4874]: I0122 11:54:59.854944 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52qk5\" (UniqueName: \"kubernetes.io/projected/4c478499-7674-4b38-b0ea-15b5d9bc4702-kube-api-access-52qk5\") pod \"certified-operators-98926\" (UID: \"4c478499-7674-4b38-b0ea-15b5d9bc4702\") " pod="openshift-marketplace/certified-operators-98926" Jan 22 11:54:59 crc kubenswrapper[4874]: I0122 11:54:59.855200 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c478499-7674-4b38-b0ea-15b5d9bc4702-catalog-content\") pod \"certified-operators-98926\" (UID: \"4c478499-7674-4b38-b0ea-15b5d9bc4702\") " pod="openshift-marketplace/certified-operators-98926" Jan 22 11:54:59 crc kubenswrapper[4874]: I0122 11:54:59.855324 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c478499-7674-4b38-b0ea-15b5d9bc4702-utilities\") pod \"certified-operators-98926\" (UID: \"4c478499-7674-4b38-b0ea-15b5d9bc4702\") " pod="openshift-marketplace/certified-operators-98926" Jan 22 11:54:59 crc kubenswrapper[4874]: I0122 11:54:59.956811 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52qk5\" (UniqueName: 
\"kubernetes.io/projected/4c478499-7674-4b38-b0ea-15b5d9bc4702-kube-api-access-52qk5\") pod \"certified-operators-98926\" (UID: \"4c478499-7674-4b38-b0ea-15b5d9bc4702\") " pod="openshift-marketplace/certified-operators-98926" Jan 22 11:54:59 crc kubenswrapper[4874]: I0122 11:54:59.956877 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c478499-7674-4b38-b0ea-15b5d9bc4702-catalog-content\") pod \"certified-operators-98926\" (UID: \"4c478499-7674-4b38-b0ea-15b5d9bc4702\") " pod="openshift-marketplace/certified-operators-98926" Jan 22 11:54:59 crc kubenswrapper[4874]: I0122 11:54:59.956916 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c478499-7674-4b38-b0ea-15b5d9bc4702-utilities\") pod \"certified-operators-98926\" (UID: \"4c478499-7674-4b38-b0ea-15b5d9bc4702\") " pod="openshift-marketplace/certified-operators-98926" Jan 22 11:54:59 crc kubenswrapper[4874]: I0122 11:54:59.992447 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52qk5\" (UniqueName: \"kubernetes.io/projected/4c478499-7674-4b38-b0ea-15b5d9bc4702-kube-api-access-52qk5\") pod \"certified-operators-98926\" (UID: \"4c478499-7674-4b38-b0ea-15b5d9bc4702\") " pod="openshift-marketplace/certified-operators-98926" Jan 22 11:55:00 crc kubenswrapper[4874]: I0122 11:55:00.323680 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c478499-7674-4b38-b0ea-15b5d9bc4702-utilities\") pod \"certified-operators-98926\" (UID: \"4c478499-7674-4b38-b0ea-15b5d9bc4702\") " pod="openshift-marketplace/certified-operators-98926" Jan 22 11:55:00 crc kubenswrapper[4874]: I0122 11:55:00.323704 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4c478499-7674-4b38-b0ea-15b5d9bc4702-catalog-content\") pod \"certified-operators-98926\" (UID: \"4c478499-7674-4b38-b0ea-15b5d9bc4702\") " pod="openshift-marketplace/certified-operators-98926" Jan 22 11:55:00 crc kubenswrapper[4874]: I0122 11:55:00.454974 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-98926" Jan 22 11:55:00 crc kubenswrapper[4874]: I0122 11:55:00.656695 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-98926"] Jan 22 11:55:01 crc kubenswrapper[4874]: I0122 11:55:01.476860 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98926" event={"ID":"4c478499-7674-4b38-b0ea-15b5d9bc4702","Type":"ContainerStarted","Data":"e0f3f6d1f5b747eee16a44d88cba427be28f68d393c3fb831a5b58b3a5f51187"} Jan 22 11:55:02 crc kubenswrapper[4874]: I0122 11:55:02.488154 4874 generic.go:334] "Generic (PLEG): container finished" podID="4c478499-7674-4b38-b0ea-15b5d9bc4702" containerID="1f08a9eb23710c352ac2bbcd443d7a323298901463e74e0229f361507efe2996" exitCode=0 Jan 22 11:55:02 crc kubenswrapper[4874]: I0122 11:55:02.488433 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98926" event={"ID":"4c478499-7674-4b38-b0ea-15b5d9bc4702","Type":"ContainerDied","Data":"1f08a9eb23710c352ac2bbcd443d7a323298901463e74e0229f361507efe2996"} Jan 22 11:55:03 crc kubenswrapper[4874]: I0122 11:55:03.782190 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rgf9f"] Jan 22 11:55:03 crc kubenswrapper[4874]: I0122 11:55:03.784366 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:03 crc kubenswrapper[4874]: I0122 11:55:03.786868 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rgf9f"] Jan 22 11:55:03 crc kubenswrapper[4874]: I0122 11:55:03.807739 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d056385-773b-49d8-b721-7d0162438e9f-catalog-content\") pod \"community-operators-rgf9f\" (UID: \"4d056385-773b-49d8-b721-7d0162438e9f\") " pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:03 crc kubenswrapper[4874]: I0122 11:55:03.807800 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd6s5\" (UniqueName: \"kubernetes.io/projected/4d056385-773b-49d8-b721-7d0162438e9f-kube-api-access-rd6s5\") pod \"community-operators-rgf9f\" (UID: \"4d056385-773b-49d8-b721-7d0162438e9f\") " pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:03 crc kubenswrapper[4874]: I0122 11:55:03.807855 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d056385-773b-49d8-b721-7d0162438e9f-utilities\") pod \"community-operators-rgf9f\" (UID: \"4d056385-773b-49d8-b721-7d0162438e9f\") " pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:03 crc kubenswrapper[4874]: I0122 11:55:03.908825 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd6s5\" (UniqueName: \"kubernetes.io/projected/4d056385-773b-49d8-b721-7d0162438e9f-kube-api-access-rd6s5\") pod \"community-operators-rgf9f\" (UID: \"4d056385-773b-49d8-b721-7d0162438e9f\") " pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:03 crc kubenswrapper[4874]: I0122 11:55:03.908955 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d056385-773b-49d8-b721-7d0162438e9f-utilities\") pod \"community-operators-rgf9f\" (UID: \"4d056385-773b-49d8-b721-7d0162438e9f\") " pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:03 crc kubenswrapper[4874]: I0122 11:55:03.909028 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d056385-773b-49d8-b721-7d0162438e9f-catalog-content\") pod \"community-operators-rgf9f\" (UID: \"4d056385-773b-49d8-b721-7d0162438e9f\") " pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:03 crc kubenswrapper[4874]: I0122 11:55:03.909600 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d056385-773b-49d8-b721-7d0162438e9f-catalog-content\") pod \"community-operators-rgf9f\" (UID: \"4d056385-773b-49d8-b721-7d0162438e9f\") " pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:03 crc kubenswrapper[4874]: I0122 11:55:03.910602 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d056385-773b-49d8-b721-7d0162438e9f-utilities\") pod \"community-operators-rgf9f\" (UID: \"4d056385-773b-49d8-b721-7d0162438e9f\") " pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:03 crc kubenswrapper[4874]: I0122 11:55:03.929199 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd6s5\" (UniqueName: \"kubernetes.io/projected/4d056385-773b-49d8-b721-7d0162438e9f-kube-api-access-rd6s5\") pod \"community-operators-rgf9f\" (UID: \"4d056385-773b-49d8-b721-7d0162438e9f\") " pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:04 crc kubenswrapper[4874]: I0122 11:55:04.104129 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:04 crc kubenswrapper[4874]: I0122 11:55:04.545750 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rgf9f"] Jan 22 11:55:04 crc kubenswrapper[4874]: W0122 11:55:04.556487 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d056385_773b_49d8_b721_7d0162438e9f.slice/crio-b7a8bc688a5eaec996a6e4d62822b5030c9d56d63849d9f140695f947f81b1f3 WatchSource:0}: Error finding container b7a8bc688a5eaec996a6e4d62822b5030c9d56d63849d9f140695f947f81b1f3: Status 404 returned error can't find the container with id b7a8bc688a5eaec996a6e4d62822b5030c9d56d63849d9f140695f947f81b1f3 Jan 22 11:55:05 crc kubenswrapper[4874]: I0122 11:55:05.505753 4874 generic.go:334] "Generic (PLEG): container finished" podID="4d056385-773b-49d8-b721-7d0162438e9f" containerID="c028ad03fcd0a54943d191f3957825c84b32372e3465fe517f4fcb2b3968f2a2" exitCode=0 Jan 22 11:55:05 crc kubenswrapper[4874]: I0122 11:55:05.505902 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgf9f" event={"ID":"4d056385-773b-49d8-b721-7d0162438e9f","Type":"ContainerDied","Data":"c028ad03fcd0a54943d191f3957825c84b32372e3465fe517f4fcb2b3968f2a2"} Jan 22 11:55:05 crc kubenswrapper[4874]: I0122 11:55:05.506043 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgf9f" event={"ID":"4d056385-773b-49d8-b721-7d0162438e9f","Type":"ContainerStarted","Data":"b7a8bc688a5eaec996a6e4d62822b5030c9d56d63849d9f140695f947f81b1f3"} Jan 22 11:55:06 crc kubenswrapper[4874]: I0122 11:55:06.175491 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mmh7d"] Jan 22 11:55:06 crc kubenswrapper[4874]: I0122 11:55:06.176943 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:06 crc kubenswrapper[4874]: I0122 11:55:06.193221 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mmh7d"] Jan 22 11:55:06 crc kubenswrapper[4874]: I0122 11:55:06.359736 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407598a9-9770-4df2-aa40-0b7a2ebb6b74-utilities\") pod \"redhat-operators-mmh7d\" (UID: \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\") " pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:06 crc kubenswrapper[4874]: I0122 11:55:06.359805 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q74r6\" (UniqueName: \"kubernetes.io/projected/407598a9-9770-4df2-aa40-0b7a2ebb6b74-kube-api-access-q74r6\") pod \"redhat-operators-mmh7d\" (UID: \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\") " pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:06 crc kubenswrapper[4874]: I0122 11:55:06.359888 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407598a9-9770-4df2-aa40-0b7a2ebb6b74-catalog-content\") pod \"redhat-operators-mmh7d\" (UID: \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\") " pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:06 crc kubenswrapper[4874]: I0122 11:55:06.461115 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407598a9-9770-4df2-aa40-0b7a2ebb6b74-catalog-content\") pod \"redhat-operators-mmh7d\" (UID: \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\") " pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:06 crc kubenswrapper[4874]: I0122 11:55:06.461188 4874 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407598a9-9770-4df2-aa40-0b7a2ebb6b74-utilities\") pod \"redhat-operators-mmh7d\" (UID: \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\") " pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:06 crc kubenswrapper[4874]: I0122 11:55:06.461217 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q74r6\" (UniqueName: \"kubernetes.io/projected/407598a9-9770-4df2-aa40-0b7a2ebb6b74-kube-api-access-q74r6\") pod \"redhat-operators-mmh7d\" (UID: \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\") " pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:06 crc kubenswrapper[4874]: I0122 11:55:06.461760 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407598a9-9770-4df2-aa40-0b7a2ebb6b74-catalog-content\") pod \"redhat-operators-mmh7d\" (UID: \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\") " pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:06 crc kubenswrapper[4874]: I0122 11:55:06.461760 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407598a9-9770-4df2-aa40-0b7a2ebb6b74-utilities\") pod \"redhat-operators-mmh7d\" (UID: \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\") " pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:06 crc kubenswrapper[4874]: I0122 11:55:06.480827 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q74r6\" (UniqueName: \"kubernetes.io/projected/407598a9-9770-4df2-aa40-0b7a2ebb6b74-kube-api-access-q74r6\") pod \"redhat-operators-mmh7d\" (UID: \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\") " pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:06 crc kubenswrapper[4874]: I0122 11:55:06.498708 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:07 crc kubenswrapper[4874]: I0122 11:55:07.378075 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mmh7d"] Jan 22 11:55:07 crc kubenswrapper[4874]: I0122 11:55:07.517544 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmh7d" event={"ID":"407598a9-9770-4df2-aa40-0b7a2ebb6b74","Type":"ContainerStarted","Data":"d5440e0f69fa163e2790d5d166c6aabcddecace2bc92b400850bfe5cdb573090"} Jan 22 11:55:07 crc kubenswrapper[4874]: I0122 11:55:07.519918 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98926" event={"ID":"4c478499-7674-4b38-b0ea-15b5d9bc4702","Type":"ContainerStarted","Data":"3423c31fe5f0d3c779afd5ba1f80b7ff941130a3098df2783d7fa827e40e5745"} Jan 22 11:55:08 crc kubenswrapper[4874]: E0122 11:55:08.158775 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d056385_773b_49d8_b721_7d0162438e9f.slice/crio-conmon-61e621791d40f5cf7de14e9a32d33d52133ebb7e9baf8f265b05b722c24ab767.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:55:08 crc kubenswrapper[4874]: I0122 11:55:08.528005 4874 generic.go:334] "Generic (PLEG): container finished" podID="4c478499-7674-4b38-b0ea-15b5d9bc4702" containerID="3423c31fe5f0d3c779afd5ba1f80b7ff941130a3098df2783d7fa827e40e5745" exitCode=0 Jan 22 11:55:08 crc kubenswrapper[4874]: I0122 11:55:08.528105 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98926" event={"ID":"4c478499-7674-4b38-b0ea-15b5d9bc4702","Type":"ContainerDied","Data":"3423c31fe5f0d3c779afd5ba1f80b7ff941130a3098df2783d7fa827e40e5745"} Jan 22 11:55:08 crc kubenswrapper[4874]: I0122 11:55:08.533285 4874 generic.go:334] "Generic (PLEG): container 
finished" podID="4d056385-773b-49d8-b721-7d0162438e9f" containerID="61e621791d40f5cf7de14e9a32d33d52133ebb7e9baf8f265b05b722c24ab767" exitCode=0 Jan 22 11:55:08 crc kubenswrapper[4874]: I0122 11:55:08.533430 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgf9f" event={"ID":"4d056385-773b-49d8-b721-7d0162438e9f","Type":"ContainerDied","Data":"61e621791d40f5cf7de14e9a32d33d52133ebb7e9baf8f265b05b722c24ab767"} Jan 22 11:55:08 crc kubenswrapper[4874]: I0122 11:55:08.536168 4874 generic.go:334] "Generic (PLEG): container finished" podID="407598a9-9770-4df2-aa40-0b7a2ebb6b74" containerID="40c28e2e42abf9261f99365d65326b15740836021a91f4acb26259216ca3e7b1" exitCode=0 Jan 22 11:55:08 crc kubenswrapper[4874]: I0122 11:55:08.536207 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmh7d" event={"ID":"407598a9-9770-4df2-aa40-0b7a2ebb6b74","Type":"ContainerDied","Data":"40c28e2e42abf9261f99365d65326b15740836021a91f4acb26259216ca3e7b1"} Jan 22 11:55:09 crc kubenswrapper[4874]: I0122 11:55:09.543465 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgf9f" event={"ID":"4d056385-773b-49d8-b721-7d0162438e9f","Type":"ContainerStarted","Data":"d491a04b41daea23345a2b813786ba63b9a8a194f6fb063d231c22300029408b"} Jan 22 11:55:09 crc kubenswrapper[4874]: I0122 11:55:09.545758 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmh7d" event={"ID":"407598a9-9770-4df2-aa40-0b7a2ebb6b74","Type":"ContainerStarted","Data":"0bc4b419597059c10171b789a054785cf3bb4929a5925290a6549cdfef194806"} Jan 22 11:55:09 crc kubenswrapper[4874]: I0122 11:55:09.547818 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98926" 
event={"ID":"4c478499-7674-4b38-b0ea-15b5d9bc4702","Type":"ContainerStarted","Data":"966fb6f463dcf74310bf96d9792bc06193422b6066b0ad13bd2cf39713a58fe7"} Jan 22 11:55:09 crc kubenswrapper[4874]: I0122 11:55:09.576675 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rgf9f" podStartSLOduration=4.738876114 podStartE2EDuration="6.576655681s" podCreationTimestamp="2026-01-22 11:55:03 +0000 UTC" firstStartedPulling="2026-01-22 11:55:07.105561751 +0000 UTC m=+880.950632821" lastFinishedPulling="2026-01-22 11:55:08.943341308 +0000 UTC m=+882.788412388" observedRunningTime="2026-01-22 11:55:09.570118289 +0000 UTC m=+883.415189369" watchObservedRunningTime="2026-01-22 11:55:09.576655681 +0000 UTC m=+883.421726751" Jan 22 11:55:09 crc kubenswrapper[4874]: I0122 11:55:09.612514 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-98926" podStartSLOduration=4.181670771 podStartE2EDuration="10.612494589s" podCreationTimestamp="2026-01-22 11:54:59 +0000 UTC" firstStartedPulling="2026-01-22 11:55:02.490956603 +0000 UTC m=+876.336027673" lastFinishedPulling="2026-01-22 11:55:08.921780421 +0000 UTC m=+882.766851491" observedRunningTime="2026-01-22 11:55:09.589984603 +0000 UTC m=+883.435055673" watchObservedRunningTime="2026-01-22 11:55:09.612494589 +0000 UTC m=+883.457565659" Jan 22 11:55:10 crc kubenswrapper[4874]: I0122 11:55:10.456180 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-98926" Jan 22 11:55:10 crc kubenswrapper[4874]: I0122 11:55:10.456229 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-98926" Jan 22 11:55:10 crc kubenswrapper[4874]: I0122 11:55:10.554278 4874 generic.go:334] "Generic (PLEG): container finished" podID="407598a9-9770-4df2-aa40-0b7a2ebb6b74" 
containerID="0bc4b419597059c10171b789a054785cf3bb4929a5925290a6549cdfef194806" exitCode=0 Jan 22 11:55:10 crc kubenswrapper[4874]: I0122 11:55:10.555257 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmh7d" event={"ID":"407598a9-9770-4df2-aa40-0b7a2ebb6b74","Type":"ContainerDied","Data":"0bc4b419597059c10171b789a054785cf3bb4929a5925290a6549cdfef194806"} Jan 22 11:55:11 crc kubenswrapper[4874]: I0122 11:55:11.500915 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-98926" podUID="4c478499-7674-4b38-b0ea-15b5d9bc4702" containerName="registry-server" probeResult="failure" output=< Jan 22 11:55:11 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Jan 22 11:55:11 crc kubenswrapper[4874]: > Jan 22 11:55:12 crc kubenswrapper[4874]: I0122 11:55:12.571333 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmh7d" event={"ID":"407598a9-9770-4df2-aa40-0b7a2ebb6b74","Type":"ContainerStarted","Data":"7502d2cf8a786e85152f56fd1e61f57c84773a339414e06e01565cfd0b41ce53"} Jan 22 11:55:12 crc kubenswrapper[4874]: I0122 11:55:12.596831 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mmh7d" podStartSLOduration=4.162854338 podStartE2EDuration="6.596804428s" podCreationTimestamp="2026-01-22 11:55:06 +0000 UTC" firstStartedPulling="2026-01-22 11:55:08.537902152 +0000 UTC m=+882.382973222" lastFinishedPulling="2026-01-22 11:55:10.971852242 +0000 UTC m=+884.816923312" observedRunningTime="2026-01-22 11:55:12.589667346 +0000 UTC m=+886.434738466" watchObservedRunningTime="2026-01-22 11:55:12.596804428 +0000 UTC m=+886.441875508" Jan 22 11:55:14 crc kubenswrapper[4874]: I0122 11:55:14.104361 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:14 crc 
kubenswrapper[4874]: I0122 11:55:14.104744 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:15 crc kubenswrapper[4874]: I0122 11:55:14.144449 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:15 crc kubenswrapper[4874]: I0122 11:55:14.673690 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:15 crc kubenswrapper[4874]: I0122 11:55:15.570261 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rgf9f"] Jan 22 11:55:16 crc kubenswrapper[4874]: I0122 11:55:16.499772 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:16 crc kubenswrapper[4874]: I0122 11:55:16.499838 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:16 crc kubenswrapper[4874]: I0122 11:55:16.617440 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rgf9f" podUID="4d056385-773b-49d8-b721-7d0162438e9f" containerName="registry-server" containerID="cri-o://d491a04b41daea23345a2b813786ba63b9a8a194f6fb063d231c22300029408b" gracePeriod=2 Jan 22 11:55:17 crc kubenswrapper[4874]: I0122 11:55:17.562337 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mmh7d" podUID="407598a9-9770-4df2-aa40-0b7a2ebb6b74" containerName="registry-server" probeResult="failure" output=< Jan 22 11:55:17 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Jan 22 11:55:17 crc kubenswrapper[4874]: > Jan 22 11:55:20 crc kubenswrapper[4874]: I0122 11:55:20.509359 4874 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-98926" Jan 22 11:55:20 crc kubenswrapper[4874]: I0122 11:55:20.554720 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-98926" Jan 22 11:55:20 crc kubenswrapper[4874]: I0122 11:55:20.690729 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-98926"] Jan 22 11:55:20 crc kubenswrapper[4874]: I0122 11:55:20.741186 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbdq6"] Jan 22 11:55:20 crc kubenswrapper[4874]: I0122 11:55:20.741472 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bbdq6" podUID="b0a069d7-0600-4490-82a9-2656913f35b7" containerName="registry-server" containerID="cri-o://56ec71b649678b9d06d64da2f108ecc5d7baa2ab0550b12655288c4bbfbcdef7" gracePeriod=2 Jan 22 11:55:22 crc kubenswrapper[4874]: I0122 11:55:22.472101 4874 generic.go:334] "Generic (PLEG): container finished" podID="4d056385-773b-49d8-b721-7d0162438e9f" containerID="d491a04b41daea23345a2b813786ba63b9a8a194f6fb063d231c22300029408b" exitCode=0 Jan 22 11:55:22 crc kubenswrapper[4874]: I0122 11:55:22.472198 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgf9f" event={"ID":"4d056385-773b-49d8-b721-7d0162438e9f","Type":"ContainerDied","Data":"d491a04b41daea23345a2b813786ba63b9a8a194f6fb063d231c22300029408b"} Jan 22 11:55:22 crc kubenswrapper[4874]: I0122 11:55:22.950033 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:23 crc kubenswrapper[4874]: I0122 11:55:23.090253 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d056385-773b-49d8-b721-7d0162438e9f-utilities\") pod \"4d056385-773b-49d8-b721-7d0162438e9f\" (UID: \"4d056385-773b-49d8-b721-7d0162438e9f\") " Jan 22 11:55:23 crc kubenswrapper[4874]: I0122 11:55:23.090299 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd6s5\" (UniqueName: \"kubernetes.io/projected/4d056385-773b-49d8-b721-7d0162438e9f-kube-api-access-rd6s5\") pod \"4d056385-773b-49d8-b721-7d0162438e9f\" (UID: \"4d056385-773b-49d8-b721-7d0162438e9f\") " Jan 22 11:55:23 crc kubenswrapper[4874]: I0122 11:55:23.090411 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d056385-773b-49d8-b721-7d0162438e9f-catalog-content\") pod \"4d056385-773b-49d8-b721-7d0162438e9f\" (UID: \"4d056385-773b-49d8-b721-7d0162438e9f\") " Jan 22 11:55:23 crc kubenswrapper[4874]: I0122 11:55:23.091179 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d056385-773b-49d8-b721-7d0162438e9f-utilities" (OuterVolumeSpecName: "utilities") pod "4d056385-773b-49d8-b721-7d0162438e9f" (UID: "4d056385-773b-49d8-b721-7d0162438e9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:55:23 crc kubenswrapper[4874]: I0122 11:55:23.104634 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d056385-773b-49d8-b721-7d0162438e9f-kube-api-access-rd6s5" (OuterVolumeSpecName: "kube-api-access-rd6s5") pod "4d056385-773b-49d8-b721-7d0162438e9f" (UID: "4d056385-773b-49d8-b721-7d0162438e9f"). InnerVolumeSpecName "kube-api-access-rd6s5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:55:23 crc kubenswrapper[4874]: I0122 11:55:23.192340 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d056385-773b-49d8-b721-7d0162438e9f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:23 crc kubenswrapper[4874]: I0122 11:55:23.192385 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd6s5\" (UniqueName: \"kubernetes.io/projected/4d056385-773b-49d8-b721-7d0162438e9f-kube-api-access-rd6s5\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:23 crc kubenswrapper[4874]: I0122 11:55:23.483596 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rgf9f" Jan 22 11:55:23 crc kubenswrapper[4874]: I0122 11:55:23.483597 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgf9f" event={"ID":"4d056385-773b-49d8-b721-7d0162438e9f","Type":"ContainerDied","Data":"b7a8bc688a5eaec996a6e4d62822b5030c9d56d63849d9f140695f947f81b1f3"} Jan 22 11:55:23 crc kubenswrapper[4874]: I0122 11:55:23.484078 4874 scope.go:117] "RemoveContainer" containerID="d491a04b41daea23345a2b813786ba63b9a8a194f6fb063d231c22300029408b" Jan 22 11:55:24 crc kubenswrapper[4874]: I0122 11:55:24.111937 4874 scope.go:117] "RemoveContainer" containerID="61e621791d40f5cf7de14e9a32d33d52133ebb7e9baf8f265b05b722c24ab767" Jan 22 11:55:24 crc kubenswrapper[4874]: I0122 11:55:24.134634 4874 scope.go:117] "RemoveContainer" containerID="c028ad03fcd0a54943d191f3957825c84b32372e3465fe517f4fcb2b3968f2a2" Jan 22 11:55:24 crc kubenswrapper[4874]: I0122 11:55:24.381531 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d056385-773b-49d8-b721-7d0162438e9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d056385-773b-49d8-b721-7d0162438e9f" (UID: 
"4d056385-773b-49d8-b721-7d0162438e9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:55:24 crc kubenswrapper[4874]: I0122 11:55:24.411365 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d056385-773b-49d8-b721-7d0162438e9f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:24 crc kubenswrapper[4874]: I0122 11:55:24.429079 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rgf9f"] Jan 22 11:55:24 crc kubenswrapper[4874]: I0122 11:55:24.434333 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rgf9f"] Jan 22 11:55:24 crc kubenswrapper[4874]: I0122 11:55:24.492122 4874 generic.go:334] "Generic (PLEG): container finished" podID="b0a069d7-0600-4490-82a9-2656913f35b7" containerID="56ec71b649678b9d06d64da2f108ecc5d7baa2ab0550b12655288c4bbfbcdef7" exitCode=0 Jan 22 11:55:24 crc kubenswrapper[4874]: I0122 11:55:24.492167 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbdq6" event={"ID":"b0a069d7-0600-4490-82a9-2656913f35b7","Type":"ContainerDied","Data":"56ec71b649678b9d06d64da2f108ecc5d7baa2ab0550b12655288c4bbfbcdef7"} Jan 22 11:55:24 crc kubenswrapper[4874]: I0122 11:55:24.726541 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d056385-773b-49d8-b721-7d0162438e9f" path="/var/lib/kubelet/pods/4d056385-773b-49d8-b721-7d0162438e9f/volumes" Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.119111 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.221487 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlqxz\" (UniqueName: \"kubernetes.io/projected/b0a069d7-0600-4490-82a9-2656913f35b7-kube-api-access-jlqxz\") pod \"b0a069d7-0600-4490-82a9-2656913f35b7\" (UID: \"b0a069d7-0600-4490-82a9-2656913f35b7\") " Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.221561 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a069d7-0600-4490-82a9-2656913f35b7-utilities\") pod \"b0a069d7-0600-4490-82a9-2656913f35b7\" (UID: \"b0a069d7-0600-4490-82a9-2656913f35b7\") " Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.221594 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a069d7-0600-4490-82a9-2656913f35b7-catalog-content\") pod \"b0a069d7-0600-4490-82a9-2656913f35b7\" (UID: \"b0a069d7-0600-4490-82a9-2656913f35b7\") " Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.222504 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0a069d7-0600-4490-82a9-2656913f35b7-utilities" (OuterVolumeSpecName: "utilities") pod "b0a069d7-0600-4490-82a9-2656913f35b7" (UID: "b0a069d7-0600-4490-82a9-2656913f35b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.236120 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a069d7-0600-4490-82a9-2656913f35b7-kube-api-access-jlqxz" (OuterVolumeSpecName: "kube-api-access-jlqxz") pod "b0a069d7-0600-4490-82a9-2656913f35b7" (UID: "b0a069d7-0600-4490-82a9-2656913f35b7"). InnerVolumeSpecName "kube-api-access-jlqxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.277138 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0a069d7-0600-4490-82a9-2656913f35b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0a069d7-0600-4490-82a9-2656913f35b7" (UID: "b0a069d7-0600-4490-82a9-2656913f35b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.322997 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0a069d7-0600-4490-82a9-2656913f35b7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.323033 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlqxz\" (UniqueName: \"kubernetes.io/projected/b0a069d7-0600-4490-82a9-2656913f35b7-kube-api-access-jlqxz\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.323045 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0a069d7-0600-4490-82a9-2656913f35b7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.498357 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbdq6" event={"ID":"b0a069d7-0600-4490-82a9-2656913f35b7","Type":"ContainerDied","Data":"77b7f983a0caf174b2a1ff9585d546c476923723f895d8d2ccf6e1a80d72b6c1"} Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.498416 4874 scope.go:117] "RemoveContainer" containerID="56ec71b649678b9d06d64da2f108ecc5d7baa2ab0550b12655288c4bbfbcdef7" Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.498471 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbdq6" Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.515128 4874 scope.go:117] "RemoveContainer" containerID="1ca6b3f9986d9290ed89e908e86e7d2e92aa24a906749df6d2ac85d44a6ba2a1" Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.538474 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbdq6"] Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.543174 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bbdq6"] Jan 22 11:55:25 crc kubenswrapper[4874]: I0122 11:55:25.543569 4874 scope.go:117] "RemoveContainer" containerID="2136577bcdcbdcffa2ef9ace83cb7011df13fa4d06b362c2ac9b0ca6f8650b12" Jan 22 11:55:26 crc kubenswrapper[4874]: I0122 11:55:26.582275 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:26 crc kubenswrapper[4874]: I0122 11:55:26.627457 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:26 crc kubenswrapper[4874]: I0122 11:55:26.737908 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a069d7-0600-4490-82a9-2656913f35b7" path="/var/lib/kubelet/pods/b0a069d7-0600-4490-82a9-2656913f35b7/volumes" Jan 22 11:55:29 crc kubenswrapper[4874]: I0122 11:55:29.342017 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mmh7d"] Jan 22 11:55:29 crc kubenswrapper[4874]: I0122 11:55:29.342614 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mmh7d" podUID="407598a9-9770-4df2-aa40-0b7a2ebb6b74" containerName="registry-server" containerID="cri-o://7502d2cf8a786e85152f56fd1e61f57c84773a339414e06e01565cfd0b41ce53" gracePeriod=2 Jan 22 11:55:29 crc kubenswrapper[4874]: I0122 
11:55:29.527326 4874 generic.go:334] "Generic (PLEG): container finished" podID="407598a9-9770-4df2-aa40-0b7a2ebb6b74" containerID="7502d2cf8a786e85152f56fd1e61f57c84773a339414e06e01565cfd0b41ce53" exitCode=0 Jan 22 11:55:29 crc kubenswrapper[4874]: I0122 11:55:29.527366 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmh7d" event={"ID":"407598a9-9770-4df2-aa40-0b7a2ebb6b74","Type":"ContainerDied","Data":"7502d2cf8a786e85152f56fd1e61f57c84773a339414e06e01565cfd0b41ce53"} Jan 22 11:55:29 crc kubenswrapper[4874]: I0122 11:55:29.705799 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:29 crc kubenswrapper[4874]: I0122 11:55:29.885749 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407598a9-9770-4df2-aa40-0b7a2ebb6b74-catalog-content\") pod \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\" (UID: \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\") " Jan 22 11:55:29 crc kubenswrapper[4874]: I0122 11:55:29.885792 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q74r6\" (UniqueName: \"kubernetes.io/projected/407598a9-9770-4df2-aa40-0b7a2ebb6b74-kube-api-access-q74r6\") pod \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\" (UID: \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\") " Jan 22 11:55:29 crc kubenswrapper[4874]: I0122 11:55:29.885883 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407598a9-9770-4df2-aa40-0b7a2ebb6b74-utilities\") pod \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\" (UID: \"407598a9-9770-4df2-aa40-0b7a2ebb6b74\") " Jan 22 11:55:29 crc kubenswrapper[4874]: I0122 11:55:29.886632 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/407598a9-9770-4df2-aa40-0b7a2ebb6b74-utilities" (OuterVolumeSpecName: "utilities") pod "407598a9-9770-4df2-aa40-0b7a2ebb6b74" (UID: "407598a9-9770-4df2-aa40-0b7a2ebb6b74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:55:29 crc kubenswrapper[4874]: I0122 11:55:29.903076 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/407598a9-9770-4df2-aa40-0b7a2ebb6b74-kube-api-access-q74r6" (OuterVolumeSpecName: "kube-api-access-q74r6") pod "407598a9-9770-4df2-aa40-0b7a2ebb6b74" (UID: "407598a9-9770-4df2-aa40-0b7a2ebb6b74"). InnerVolumeSpecName "kube-api-access-q74r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:55:29 crc kubenswrapper[4874]: I0122 11:55:29.987748 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q74r6\" (UniqueName: \"kubernetes.io/projected/407598a9-9770-4df2-aa40-0b7a2ebb6b74-kube-api-access-q74r6\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:29 crc kubenswrapper[4874]: I0122 11:55:29.987785 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407598a9-9770-4df2-aa40-0b7a2ebb6b74-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:29 crc kubenswrapper[4874]: I0122 11:55:29.998278 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/407598a9-9770-4df2-aa40-0b7a2ebb6b74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "407598a9-9770-4df2-aa40-0b7a2ebb6b74" (UID: "407598a9-9770-4df2-aa40-0b7a2ebb6b74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:55:30 crc kubenswrapper[4874]: I0122 11:55:30.089153 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407598a9-9770-4df2-aa40-0b7a2ebb6b74-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:30 crc kubenswrapper[4874]: I0122 11:55:30.536072 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mmh7d" event={"ID":"407598a9-9770-4df2-aa40-0b7a2ebb6b74","Type":"ContainerDied","Data":"d5440e0f69fa163e2790d5d166c6aabcddecace2bc92b400850bfe5cdb573090"} Jan 22 11:55:30 crc kubenswrapper[4874]: I0122 11:55:30.536126 4874 scope.go:117] "RemoveContainer" containerID="7502d2cf8a786e85152f56fd1e61f57c84773a339414e06e01565cfd0b41ce53" Jan 22 11:55:30 crc kubenswrapper[4874]: I0122 11:55:30.536129 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mmh7d" Jan 22 11:55:30 crc kubenswrapper[4874]: I0122 11:55:30.557360 4874 scope.go:117] "RemoveContainer" containerID="0bc4b419597059c10171b789a054785cf3bb4929a5925290a6549cdfef194806" Jan 22 11:55:30 crc kubenswrapper[4874]: I0122 11:55:30.571729 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mmh7d"] Jan 22 11:55:30 crc kubenswrapper[4874]: I0122 11:55:30.571792 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mmh7d"] Jan 22 11:55:30 crc kubenswrapper[4874]: I0122 11:55:30.603374 4874 scope.go:117] "RemoveContainer" containerID="40c28e2e42abf9261f99365d65326b15740836021a91f4acb26259216ca3e7b1" Jan 22 11:55:30 crc kubenswrapper[4874]: I0122 11:55:30.736116 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="407598a9-9770-4df2-aa40-0b7a2ebb6b74" path="/var/lib/kubelet/pods/407598a9-9770-4df2-aa40-0b7a2ebb6b74/volumes" Jan 22 11:55:43 crc 
kubenswrapper[4874]: I0122 11:55:43.521158 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:55:43 crc kubenswrapper[4874]: I0122 11:55:43.521966 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:55:50 crc kubenswrapper[4874]: I0122 11:55:50.686428 4874 generic.go:334] "Generic (PLEG): container finished" podID="b6347418-07a3-41af-aea9-1eddb77e64fb" containerID="e7358f32b5ffa397582e9f6239f586687e7f578c71f7955b255e87062a74ca0f" exitCode=0 Jan 22 11:55:50 crc kubenswrapper[4874]: I0122 11:55:50.687096 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"b6347418-07a3-41af-aea9-1eddb77e64fb","Type":"ContainerDied","Data":"e7358f32b5ffa397582e9f6239f586687e7f578c71f7955b255e87062a74ca0f"} Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.021281 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.186876 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/b6347418-07a3-41af-aea9-1eddb77e64fb-builder-dockercfg-jmkbp-pull\") pod \"b6347418-07a3-41af-aea9-1eddb77e64fb\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.186921 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b6347418-07a3-41af-aea9-1eddb77e64fb-buildcachedir\") pod \"b6347418-07a3-41af-aea9-1eddb77e64fb\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.186955 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-system-configs\") pod \"b6347418-07a3-41af-aea9-1eddb77e64fb\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.186981 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-build-blob-cache\") pod \"b6347418-07a3-41af-aea9-1eddb77e64fb\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.187031 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-proxy-ca-bundles\") pod \"b6347418-07a3-41af-aea9-1eddb77e64fb\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.187065 4874 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb5k6\" (UniqueName: \"kubernetes.io/projected/b6347418-07a3-41af-aea9-1eddb77e64fb-kube-api-access-kb5k6\") pod \"b6347418-07a3-41af-aea9-1eddb77e64fb\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.187086 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6347418-07a3-41af-aea9-1eddb77e64fb-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b6347418-07a3-41af-aea9-1eddb77e64fb" (UID: "b6347418-07a3-41af-aea9-1eddb77e64fb"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.187120 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-container-storage-run\") pod \"b6347418-07a3-41af-aea9-1eddb77e64fb\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.187148 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/b6347418-07a3-41af-aea9-1eddb77e64fb-builder-dockercfg-jmkbp-push\") pod \"b6347418-07a3-41af-aea9-1eddb77e64fb\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.187169 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-ca-bundles\") pod \"b6347418-07a3-41af-aea9-1eddb77e64fb\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.187204 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-container-storage-root\") pod \"b6347418-07a3-41af-aea9-1eddb77e64fb\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.187261 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6347418-07a3-41af-aea9-1eddb77e64fb-node-pullsecrets\") pod \"b6347418-07a3-41af-aea9-1eddb77e64fb\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.187300 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-buildworkdir\") pod \"b6347418-07a3-41af-aea9-1eddb77e64fb\" (UID: \"b6347418-07a3-41af-aea9-1eddb77e64fb\") " Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.187521 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6347418-07a3-41af-aea9-1eddb77e64fb-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b6347418-07a3-41af-aea9-1eddb77e64fb" (UID: "b6347418-07a3-41af-aea9-1eddb77e64fb"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.187769 4874 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b6347418-07a3-41af-aea9-1eddb77e64fb-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.187802 4874 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6347418-07a3-41af-aea9-1eddb77e64fb-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.188234 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b6347418-07a3-41af-aea9-1eddb77e64fb" (UID: "b6347418-07a3-41af-aea9-1eddb77e64fb"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.188298 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b6347418-07a3-41af-aea9-1eddb77e64fb" (UID: "b6347418-07a3-41af-aea9-1eddb77e64fb"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.188564 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b6347418-07a3-41af-aea9-1eddb77e64fb" (UID: "b6347418-07a3-41af-aea9-1eddb77e64fb"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.188848 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b6347418-07a3-41af-aea9-1eddb77e64fb" (UID: "b6347418-07a3-41af-aea9-1eddb77e64fb"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.195688 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6347418-07a3-41af-aea9-1eddb77e64fb-kube-api-access-kb5k6" (OuterVolumeSpecName: "kube-api-access-kb5k6") pod "b6347418-07a3-41af-aea9-1eddb77e64fb" (UID: "b6347418-07a3-41af-aea9-1eddb77e64fb"). InnerVolumeSpecName "kube-api-access-kb5k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.196169 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6347418-07a3-41af-aea9-1eddb77e64fb-builder-dockercfg-jmkbp-push" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-push") pod "b6347418-07a3-41af-aea9-1eddb77e64fb" (UID: "b6347418-07a3-41af-aea9-1eddb77e64fb"). InnerVolumeSpecName "builder-dockercfg-jmkbp-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.196442 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6347418-07a3-41af-aea9-1eddb77e64fb-builder-dockercfg-jmkbp-pull" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-pull") pod "b6347418-07a3-41af-aea9-1eddb77e64fb" (UID: "b6347418-07a3-41af-aea9-1eddb77e64fb"). InnerVolumeSpecName "builder-dockercfg-jmkbp-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.232899 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b6347418-07a3-41af-aea9-1eddb77e64fb" (UID: "b6347418-07a3-41af-aea9-1eddb77e64fb"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.288993 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.289037 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/b6347418-07a3-41af-aea9-1eddb77e64fb-builder-dockercfg-jmkbp-push\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.289055 4874 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.289071 4874 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.289088 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/b6347418-07a3-41af-aea9-1eddb77e64fb-builder-dockercfg-jmkbp-pull\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.289105 4874 reconciler_common.go:293] "Volume 
detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.289121 4874 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6347418-07a3-41af-aea9-1eddb77e64fb-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.289136 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb5k6\" (UniqueName: \"kubernetes.io/projected/b6347418-07a3-41af-aea9-1eddb77e64fb-kube-api-access-kb5k6\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.401275 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b6347418-07a3-41af-aea9-1eddb77e64fb" (UID: "b6347418-07a3-41af-aea9-1eddb77e64fb"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.492659 4874 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.706226 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"b6347418-07a3-41af-aea9-1eddb77e64fb","Type":"ContainerDied","Data":"bcebca6560efe9e9a2543339a21c60dc4631ab73e752c93dad1062b96446647d"} Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.706267 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcebca6560efe9e9a2543339a21c60dc4631ab73e752c93dad1062b96446647d" Jan 22 11:55:52 crc kubenswrapper[4874]: I0122 11:55:52.706342 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 11:55:54 crc kubenswrapper[4874]: I0122 11:55:54.917505 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b6347418-07a3-41af-aea9-1eddb77e64fb" (UID: "b6347418-07a3-41af-aea9-1eddb77e64fb"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:55:54 crc kubenswrapper[4874]: I0122 11:55:54.936262 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b6347418-07a3-41af-aea9-1eddb77e64fb-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065017 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 11:55:57 crc kubenswrapper[4874]: E0122 11:55:57.065613 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407598a9-9770-4df2-aa40-0b7a2ebb6b74" containerName="extract-content" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065631 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="407598a9-9770-4df2-aa40-0b7a2ebb6b74" containerName="extract-content" Jan 22 11:55:57 crc kubenswrapper[4874]: E0122 11:55:57.065647 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d056385-773b-49d8-b721-7d0162438e9f" containerName="registry-server" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065655 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d056385-773b-49d8-b721-7d0162438e9f" containerName="registry-server" Jan 22 11:55:57 crc kubenswrapper[4874]: E0122 11:55:57.065667 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d056385-773b-49d8-b721-7d0162438e9f" containerName="extract-utilities" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065676 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d056385-773b-49d8-b721-7d0162438e9f" containerName="extract-utilities" Jan 22 11:55:57 crc kubenswrapper[4874]: E0122 11:55:57.065689 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407598a9-9770-4df2-aa40-0b7a2ebb6b74" containerName="registry-server" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065698 4874 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="407598a9-9770-4df2-aa40-0b7a2ebb6b74" containerName="registry-server" Jan 22 11:55:57 crc kubenswrapper[4874]: E0122 11:55:57.065709 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407598a9-9770-4df2-aa40-0b7a2ebb6b74" containerName="extract-utilities" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065717 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="407598a9-9770-4df2-aa40-0b7a2ebb6b74" containerName="extract-utilities" Jan 22 11:55:57 crc kubenswrapper[4874]: E0122 11:55:57.065729 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a069d7-0600-4490-82a9-2656913f35b7" containerName="extract-content" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065738 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a069d7-0600-4490-82a9-2656913f35b7" containerName="extract-content" Jan 22 11:55:57 crc kubenswrapper[4874]: E0122 11:55:57.065747 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a069d7-0600-4490-82a9-2656913f35b7" containerName="registry-server" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065756 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a069d7-0600-4490-82a9-2656913f35b7" containerName="registry-server" Jan 22 11:55:57 crc kubenswrapper[4874]: E0122 11:55:57.065768 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6347418-07a3-41af-aea9-1eddb77e64fb" containerName="manage-dockerfile" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065776 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6347418-07a3-41af-aea9-1eddb77e64fb" containerName="manage-dockerfile" Jan 22 11:55:57 crc kubenswrapper[4874]: E0122 11:55:57.065786 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6347418-07a3-41af-aea9-1eddb77e64fb" containerName="docker-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065794 4874 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b6347418-07a3-41af-aea9-1eddb77e64fb" containerName="docker-build" Jan 22 11:55:57 crc kubenswrapper[4874]: E0122 11:55:57.065805 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6347418-07a3-41af-aea9-1eddb77e64fb" containerName="git-clone" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065813 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6347418-07a3-41af-aea9-1eddb77e64fb" containerName="git-clone" Jan 22 11:55:57 crc kubenswrapper[4874]: E0122 11:55:57.065826 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d056385-773b-49d8-b721-7d0162438e9f" containerName="extract-content" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065834 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d056385-773b-49d8-b721-7d0162438e9f" containerName="extract-content" Jan 22 11:55:57 crc kubenswrapper[4874]: E0122 11:55:57.065847 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a069d7-0600-4490-82a9-2656913f35b7" containerName="extract-utilities" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065855 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a069d7-0600-4490-82a9-2656913f35b7" containerName="extract-utilities" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065978 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d056385-773b-49d8-b721-7d0162438e9f" containerName="registry-server" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.065991 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6347418-07a3-41af-aea9-1eddb77e64fb" containerName="docker-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.066008 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="407598a9-9770-4df2-aa40-0b7a2ebb6b74" containerName="registry-server" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.066019 4874 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="b0a069d7-0600-4490-82a9-2656913f35b7" containerName="registry-server" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.066812 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.069266 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.069529 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jmkbp" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.069558 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.070421 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.079464 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.079504 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.079522 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sjlq\" (UniqueName: \"kubernetes.io/projected/25c1a578-3994-43bb-abfb-5dcfbac56652-kube-api-access-5sjlq\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.079550 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/25c1a578-3994-43bb-abfb-5dcfbac56652-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.079642 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.079684 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/25c1a578-3994-43bb-abfb-5dcfbac56652-builder-dockercfg-jmkbp-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.079719 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/25c1a578-3994-43bb-abfb-5dcfbac56652-builder-dockercfg-jmkbp-push\") pod \"smart-gateway-operator-1-build\" (UID: 
\"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.079752 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.079776 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.079904 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.079984 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/25c1a578-3994-43bb-abfb-5dcfbac56652-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.080019 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" 
(UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.081843 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.180351 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/25c1a578-3994-43bb-abfb-5dcfbac56652-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.180419 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.180462 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.180481 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sjlq\" (UniqueName: \"kubernetes.io/projected/25c1a578-3994-43bb-abfb-5dcfbac56652-kube-api-access-5sjlq\") pod \"smart-gateway-operator-1-build\" (UID: 
\"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.180499 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.180518 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/25c1a578-3994-43bb-abfb-5dcfbac56652-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.180548 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.180574 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/25c1a578-3994-43bb-abfb-5dcfbac56652-builder-dockercfg-jmkbp-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.180599 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: 
\"kubernetes.io/secret/25c1a578-3994-43bb-abfb-5dcfbac56652-builder-dockercfg-jmkbp-push\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.180644 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.180664 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.180707 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.181262 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/25c1a578-3994-43bb-abfb-5dcfbac56652-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.181703 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.181883 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.181894 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/25c1a578-3994-43bb-abfb-5dcfbac56652-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.181926 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.182229 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.182240 4874 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.182693 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.182939 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.186104 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/25c1a578-3994-43bb-abfb-5dcfbac56652-builder-dockercfg-jmkbp-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.189937 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/25c1a578-3994-43bb-abfb-5dcfbac56652-builder-dockercfg-jmkbp-push\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.199093 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sjlq\" (UniqueName: \"kubernetes.io/projected/25c1a578-3994-43bb-abfb-5dcfbac56652-kube-api-access-5sjlq\") pod \"smart-gateway-operator-1-build\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.390968 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:55:57 crc kubenswrapper[4874]: I0122 11:55:57.876514 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 11:55:58 crc kubenswrapper[4874]: I0122 11:55:58.779895 4874 generic.go:334] "Generic (PLEG): container finished" podID="25c1a578-3994-43bb-abfb-5dcfbac56652" containerID="e26549d52ce98cf01bc9efdf9791b993b46a6268920f6328a10f303f52ec5295" exitCode=0 Jan 22 11:55:58 crc kubenswrapper[4874]: I0122 11:55:58.779953 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"25c1a578-3994-43bb-abfb-5dcfbac56652","Type":"ContainerDied","Data":"e26549d52ce98cf01bc9efdf9791b993b46a6268920f6328a10f303f52ec5295"} Jan 22 11:55:58 crc kubenswrapper[4874]: I0122 11:55:58.780006 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"25c1a578-3994-43bb-abfb-5dcfbac56652","Type":"ContainerStarted","Data":"5133f382caa3ada8209b89766e9dbab4bde7b156a4c6c26bc6d027ea0f1922b9"} Jan 22 11:55:59 crc kubenswrapper[4874]: I0122 11:55:59.794585 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"25c1a578-3994-43bb-abfb-5dcfbac56652","Type":"ContainerStarted","Data":"919ff0b784a44bdf0e65017895fbdc7e586d036a33053e72fca76314755ed735"} Jan 22 11:56:08 crc kubenswrapper[4874]: I0122 11:56:08.046334 
4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=11.046306004 podStartE2EDuration="11.046306004s" podCreationTimestamp="2026-01-22 11:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:55:59.84067771 +0000 UTC m=+933.685748830" watchObservedRunningTime="2026-01-22 11:56:08.046306004 +0000 UTC m=+941.891377114" Jan 22 11:56:08 crc kubenswrapper[4874]: I0122 11:56:08.055745 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 11:56:08 crc kubenswrapper[4874]: I0122 11:56:08.056106 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="25c1a578-3994-43bb-abfb-5dcfbac56652" containerName="docker-build" containerID="cri-o://919ff0b784a44bdf0e65017895fbdc7e586d036a33053e72fca76314755ed735" gracePeriod=30 Jan 22 11:56:08 crc kubenswrapper[4874]: I0122 11:56:08.859251 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_25c1a578-3994-43bb-abfb-5dcfbac56652/docker-build/0.log" Jan 22 11:56:08 crc kubenswrapper[4874]: I0122 11:56:08.860460 4874 generic.go:334] "Generic (PLEG): container finished" podID="25c1a578-3994-43bb-abfb-5dcfbac56652" containerID="919ff0b784a44bdf0e65017895fbdc7e586d036a33053e72fca76314755ed735" exitCode=1 Jan 22 11:56:08 crc kubenswrapper[4874]: I0122 11:56:08.860507 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"25c1a578-3994-43bb-abfb-5dcfbac56652","Type":"ContainerDied","Data":"919ff0b784a44bdf0e65017895fbdc7e586d036a33053e72fca76314755ed735"} Jan 22 11:56:08 crc kubenswrapper[4874]: I0122 11:56:08.968860 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_25c1a578-3994-43bb-abfb-5dcfbac56652/docker-build/0.log" Jan 22 11:56:08 crc kubenswrapper[4874]: I0122 11:56:08.969238 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.149511 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/25c1a578-3994-43bb-abfb-5dcfbac56652-builder-dockercfg-jmkbp-push\") pod \"25c1a578-3994-43bb-abfb-5dcfbac56652\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.150550 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-proxy-ca-bundles\") pod \"25c1a578-3994-43bb-abfb-5dcfbac56652\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.150584 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-ca-bundles\") pod \"25c1a578-3994-43bb-abfb-5dcfbac56652\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.150661 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-container-storage-root\") pod \"25c1a578-3994-43bb-abfb-5dcfbac56652\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.150736 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: 
\"kubernetes.io/secret/25c1a578-3994-43bb-abfb-5dcfbac56652-builder-dockercfg-jmkbp-pull\") pod \"25c1a578-3994-43bb-abfb-5dcfbac56652\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.150798 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/25c1a578-3994-43bb-abfb-5dcfbac56652-node-pullsecrets\") pod \"25c1a578-3994-43bb-abfb-5dcfbac56652\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.150821 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-system-configs\") pod \"25c1a578-3994-43bb-abfb-5dcfbac56652\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.150846 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/25c1a578-3994-43bb-abfb-5dcfbac56652-buildcachedir\") pod \"25c1a578-3994-43bb-abfb-5dcfbac56652\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.150878 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-container-storage-run\") pod \"25c1a578-3994-43bb-abfb-5dcfbac56652\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.150916 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-buildworkdir\") pod \"25c1a578-3994-43bb-abfb-5dcfbac56652\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " Jan 22 11:56:09 
crc kubenswrapper[4874]: I0122 11:56:09.150945 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sjlq\" (UniqueName: \"kubernetes.io/projected/25c1a578-3994-43bb-abfb-5dcfbac56652-kube-api-access-5sjlq\") pod \"25c1a578-3994-43bb-abfb-5dcfbac56652\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.150990 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-build-blob-cache\") pod \"25c1a578-3994-43bb-abfb-5dcfbac56652\" (UID: \"25c1a578-3994-43bb-abfb-5dcfbac56652\") " Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.151038 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25c1a578-3994-43bb-abfb-5dcfbac56652-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "25c1a578-3994-43bb-abfb-5dcfbac56652" (UID: "25c1a578-3994-43bb-abfb-5dcfbac56652"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.151124 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25c1a578-3994-43bb-abfb-5dcfbac56652-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "25c1a578-3994-43bb-abfb-5dcfbac56652" (UID: "25c1a578-3994-43bb-abfb-5dcfbac56652"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.151569 4874 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/25c1a578-3994-43bb-abfb-5dcfbac56652-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.151595 4874 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/25c1a578-3994-43bb-abfb-5dcfbac56652-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.151754 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "25c1a578-3994-43bb-abfb-5dcfbac56652" (UID: "25c1a578-3994-43bb-abfb-5dcfbac56652"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.151786 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "25c1a578-3994-43bb-abfb-5dcfbac56652" (UID: "25c1a578-3994-43bb-abfb-5dcfbac56652"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.152672 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "25c1a578-3994-43bb-abfb-5dcfbac56652" (UID: "25c1a578-3994-43bb-abfb-5dcfbac56652"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.153237 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "25c1a578-3994-43bb-abfb-5dcfbac56652" (UID: "25c1a578-3994-43bb-abfb-5dcfbac56652"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.153509 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "25c1a578-3994-43bb-abfb-5dcfbac56652" (UID: "25c1a578-3994-43bb-abfb-5dcfbac56652"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.155932 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c1a578-3994-43bb-abfb-5dcfbac56652-builder-dockercfg-jmkbp-pull" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-pull") pod "25c1a578-3994-43bb-abfb-5dcfbac56652" (UID: "25c1a578-3994-43bb-abfb-5dcfbac56652"). InnerVolumeSpecName "builder-dockercfg-jmkbp-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.157101 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c1a578-3994-43bb-abfb-5dcfbac56652-kube-api-access-5sjlq" (OuterVolumeSpecName: "kube-api-access-5sjlq") pod "25c1a578-3994-43bb-abfb-5dcfbac56652" (UID: "25c1a578-3994-43bb-abfb-5dcfbac56652"). InnerVolumeSpecName "kube-api-access-5sjlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.157778 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c1a578-3994-43bb-abfb-5dcfbac56652-builder-dockercfg-jmkbp-push" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-push") pod "25c1a578-3994-43bb-abfb-5dcfbac56652" (UID: "25c1a578-3994-43bb-abfb-5dcfbac56652"). InnerVolumeSpecName "builder-dockercfg-jmkbp-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.252900 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/25c1a578-3994-43bb-abfb-5dcfbac56652-builder-dockercfg-jmkbp-pull\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.252952 4874 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.252973 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.252991 4874 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.253010 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sjlq\" (UniqueName: \"kubernetes.io/projected/25c1a578-3994-43bb-abfb-5dcfbac56652-kube-api-access-5sjlq\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.253028 
4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/25c1a578-3994-43bb-abfb-5dcfbac56652-builder-dockercfg-jmkbp-push\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.253045 4874 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.253062 4874 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25c1a578-3994-43bb-abfb-5dcfbac56652-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.347729 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "25c1a578-3994-43bb-abfb-5dcfbac56652" (UID: "25c1a578-3994-43bb-abfb-5dcfbac56652"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.354735 4874 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.659924 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 22 11:56:09 crc kubenswrapper[4874]: E0122 11:56:09.660228 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c1a578-3994-43bb-abfb-5dcfbac56652" containerName="manage-dockerfile" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.660252 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c1a578-3994-43bb-abfb-5dcfbac56652" containerName="manage-dockerfile" Jan 22 11:56:09 crc kubenswrapper[4874]: E0122 11:56:09.660271 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c1a578-3994-43bb-abfb-5dcfbac56652" containerName="docker-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.660281 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c1a578-3994-43bb-abfb-5dcfbac56652" containerName="docker-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.660496 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="25c1a578-3994-43bb-abfb-5dcfbac56652" containerName="docker-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.661886 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.669726 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.669746 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.669952 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.676343 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.694141 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "25c1a578-3994-43bb-abfb-5dcfbac56652" (UID: "25c1a578-3994-43bb-abfb-5dcfbac56652"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.760184 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.760305 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e36b7db7-fd44-4552-8710-adb858c931c9-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.760368 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.760466 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e36b7db7-fd44-4552-8710-adb858c931c9-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.760498 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.760587 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.760641 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/e36b7db7-fd44-4552-8710-adb858c931c9-builder-dockercfg-jmkbp-push\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.760725 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.760778 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pz6p\" (UniqueName: \"kubernetes.io/projected/e36b7db7-fd44-4552-8710-adb858c931c9-kube-api-access-2pz6p\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc 
kubenswrapper[4874]: I0122 11:56:09.760818 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/e36b7db7-fd44-4552-8710-adb858c931c9-builder-dockercfg-jmkbp-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.760851 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.760910 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.760999 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/25c1a578-3994-43bb-abfb-5dcfbac56652-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.862526 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc 
kubenswrapper[4874]: I0122 11:56:09.862631 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pz6p\" (UniqueName: \"kubernetes.io/projected/e36b7db7-fd44-4552-8710-adb858c931c9-kube-api-access-2pz6p\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.862689 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/e36b7db7-fd44-4552-8710-adb858c931c9-builder-dockercfg-jmkbp-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.862736 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.862811 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.862892 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: 
\"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.862980 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e36b7db7-fd44-4552-8710-adb858c931c9-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.863049 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.863095 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e36b7db7-fd44-4552-8710-adb858c931c9-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.863136 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.863286 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-proxy-ca-bundles\") pod 
\"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.863357 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/e36b7db7-fd44-4552-8710-adb858c931c9-builder-dockercfg-jmkbp-push\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.863573 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.863611 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e36b7db7-fd44-4552-8710-adb858c931c9-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.863997 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.864574 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e36b7db7-fd44-4552-8710-adb858c931c9-buildcachedir\") pod 
\"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.865075 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.865394 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.865926 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.866114 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.866735 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.869496 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/e36b7db7-fd44-4552-8710-adb858c931c9-builder-dockercfg-jmkbp-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.873325 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_25c1a578-3994-43bb-abfb-5dcfbac56652/docker-build/0.log" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.873994 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"25c1a578-3994-43bb-abfb-5dcfbac56652","Type":"ContainerDied","Data":"5133f382caa3ada8209b89766e9dbab4bde7b156a4c6c26bc6d027ea0f1922b9"} Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.874071 4874 scope.go:117] "RemoveContainer" containerID="919ff0b784a44bdf0e65017895fbdc7e586d036a33053e72fca76314755ed735" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.874183 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.883946 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/e36b7db7-fd44-4552-8710-adb858c931c9-builder-dockercfg-jmkbp-push\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.888928 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pz6p\" (UniqueName: \"kubernetes.io/projected/e36b7db7-fd44-4552-8710-adb858c931c9-kube-api-access-2pz6p\") pod \"smart-gateway-operator-2-build\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.952924 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.964274 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 11:56:09 crc kubenswrapper[4874]: I0122 11:56:09.992185 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:56:10 crc kubenswrapper[4874]: I0122 11:56:10.004721 4874 scope.go:117] "RemoveContainer" containerID="e26549d52ce98cf01bc9efdf9791b993b46a6268920f6328a10f303f52ec5295" Jan 22 11:56:10 crc kubenswrapper[4874]: I0122 11:56:10.502526 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 22 11:56:10 crc kubenswrapper[4874]: W0122 11:56:10.511285 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode36b7db7_fd44_4552_8710_adb858c931c9.slice/crio-a455ec88ef39edfe840d2acca84b9a3be2dc686080708adaf4c9e9cbdfea499a WatchSource:0}: Error finding container a455ec88ef39edfe840d2acca84b9a3be2dc686080708adaf4c9e9cbdfea499a: Status 404 returned error can't find the container with id a455ec88ef39edfe840d2acca84b9a3be2dc686080708adaf4c9e9cbdfea499a Jan 22 11:56:10 crc kubenswrapper[4874]: I0122 11:56:10.726474 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25c1a578-3994-43bb-abfb-5dcfbac56652" path="/var/lib/kubelet/pods/25c1a578-3994-43bb-abfb-5dcfbac56652/volumes" Jan 22 11:56:10 crc kubenswrapper[4874]: I0122 11:56:10.884349 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"e36b7db7-fd44-4552-8710-adb858c931c9","Type":"ContainerStarted","Data":"a455ec88ef39edfe840d2acca84b9a3be2dc686080708adaf4c9e9cbdfea499a"} Jan 22 11:56:11 crc kubenswrapper[4874]: I0122 11:56:11.895124 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"e36b7db7-fd44-4552-8710-adb858c931c9","Type":"ContainerStarted","Data":"28499c0b98f716958469ca68853b5a713dc17068a8f31f5c05116dd3151a40a9"} Jan 22 11:56:12 crc kubenswrapper[4874]: I0122 11:56:12.903663 4874 generic.go:334] "Generic (PLEG): container finished" 
podID="e36b7db7-fd44-4552-8710-adb858c931c9" containerID="28499c0b98f716958469ca68853b5a713dc17068a8f31f5c05116dd3151a40a9" exitCode=0 Jan 22 11:56:12 crc kubenswrapper[4874]: I0122 11:56:12.904322 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"e36b7db7-fd44-4552-8710-adb858c931c9","Type":"ContainerDied","Data":"28499c0b98f716958469ca68853b5a713dc17068a8f31f5c05116dd3151a40a9"} Jan 22 11:56:13 crc kubenswrapper[4874]: I0122 11:56:13.520684 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:56:13 crc kubenswrapper[4874]: I0122 11:56:13.520742 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:56:13 crc kubenswrapper[4874]: I0122 11:56:13.912922 4874 generic.go:334] "Generic (PLEG): container finished" podID="e36b7db7-fd44-4552-8710-adb858c931c9" containerID="5bc9027fb3ee25faa268acd977fc54b0f5fb85130c768e0e084e4cd0a17cda10" exitCode=0 Jan 22 11:56:13 crc kubenswrapper[4874]: I0122 11:56:13.912960 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"e36b7db7-fd44-4552-8710-adb858c931c9","Type":"ContainerDied","Data":"5bc9027fb3ee25faa268acd977fc54b0f5fb85130c768e0e084e4cd0a17cda10"} Jan 22 11:56:13 crc kubenswrapper[4874]: I0122 11:56:13.959875 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_e36b7db7-fd44-4552-8710-adb858c931c9/manage-dockerfile/0.log" Jan 22 11:56:14 crc kubenswrapper[4874]: I0122 11:56:14.923687 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"e36b7db7-fd44-4552-8710-adb858c931c9","Type":"ContainerStarted","Data":"f373e74b504c6b60d88ff77f379cfc029b713c333e26f43f4c2808ee66deffc4"} Jan 22 11:56:14 crc kubenswrapper[4874]: I0122 11:56:14.960314 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=5.9602894410000005 podStartE2EDuration="5.960289441s" podCreationTimestamp="2026-01-22 11:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:56:14.956150102 +0000 UTC m=+948.801221222" watchObservedRunningTime="2026-01-22 11:56:14.960289441 +0000 UTC m=+948.805360531" Jan 22 11:56:43 crc kubenswrapper[4874]: I0122 11:56:43.520252 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:56:43 crc kubenswrapper[4874]: I0122 11:56:43.520838 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:56:43 crc kubenswrapper[4874]: I0122 11:56:43.520892 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 
22 11:56:43 crc kubenswrapper[4874]: I0122 11:56:43.521604 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2554a3567b106d7ded370e89a12bf13dafb3f02d930bcb0aa478a1f4bf2cf32b"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:56:43 crc kubenswrapper[4874]: I0122 11:56:43.521675 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://2554a3567b106d7ded370e89a12bf13dafb3f02d930bcb0aa478a1f4bf2cf32b" gracePeriod=600 Jan 22 11:56:44 crc kubenswrapper[4874]: I0122 11:56:44.125008 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="2554a3567b106d7ded370e89a12bf13dafb3f02d930bcb0aa478a1f4bf2cf32b" exitCode=0 Jan 22 11:56:44 crc kubenswrapper[4874]: I0122 11:56:44.125085 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"2554a3567b106d7ded370e89a12bf13dafb3f02d930bcb0aa478a1f4bf2cf32b"} Jan 22 11:56:44 crc kubenswrapper[4874]: I0122 11:56:44.125433 4874 scope.go:117] "RemoveContainer" containerID="58a69e8f9170bdd4dd90e6e773cd03089d2a6279398d2b1a2ba4ed87135be13a" Jan 22 11:56:45 crc kubenswrapper[4874]: I0122 11:56:45.134827 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"8527e378a489c65991df9b62c1d10fce3a020be8e3c0c8d8d62b128fb0466805"} Jan 22 11:57:26 crc kubenswrapper[4874]: I0122 11:57:26.440594 4874 
generic.go:334] "Generic (PLEG): container finished" podID="e36b7db7-fd44-4552-8710-adb858c931c9" containerID="f373e74b504c6b60d88ff77f379cfc029b713c333e26f43f4c2808ee66deffc4" exitCode=0 Jan 22 11:57:26 crc kubenswrapper[4874]: I0122 11:57:26.440684 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"e36b7db7-fd44-4552-8710-adb858c931c9","Type":"ContainerDied","Data":"f373e74b504c6b60d88ff77f379cfc029b713c333e26f43f4c2808ee66deffc4"} Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.727301 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.927068 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-proxy-ca-bundles\") pod \"e36b7db7-fd44-4552-8710-adb858c931c9\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.927427 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-build-blob-cache\") pod \"e36b7db7-fd44-4552-8710-adb858c931c9\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.927448 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-container-storage-run\") pod \"e36b7db7-fd44-4552-8710-adb858c931c9\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.927480 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-buildworkdir\") pod \"e36b7db7-fd44-4552-8710-adb858c931c9\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.927498 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pz6p\" (UniqueName: \"kubernetes.io/projected/e36b7db7-fd44-4552-8710-adb858c931c9-kube-api-access-2pz6p\") pod \"e36b7db7-fd44-4552-8710-adb858c931c9\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.928094 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e36b7db7-fd44-4552-8710-adb858c931c9" (UID: "e36b7db7-fd44-4552-8710-adb858c931c9"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.928484 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e36b7db7-fd44-4552-8710-adb858c931c9" (UID: "e36b7db7-fd44-4552-8710-adb858c931c9"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.928538 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e36b7db7-fd44-4552-8710-adb858c931c9-node-pullsecrets\") pod \"e36b7db7-fd44-4552-8710-adb858c931c9\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.928598 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36b7db7-fd44-4552-8710-adb858c931c9-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e36b7db7-fd44-4552-8710-adb858c931c9" (UID: "e36b7db7-fd44-4552-8710-adb858c931c9"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.928634 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-system-configs\") pod \"e36b7db7-fd44-4552-8710-adb858c931c9\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.928660 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-ca-bundles\") pod \"e36b7db7-fd44-4552-8710-adb858c931c9\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.928920 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e36b7db7-fd44-4552-8710-adb858c931c9" (UID: "e36b7db7-fd44-4552-8710-adb858c931c9"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.928978 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/e36b7db7-fd44-4552-8710-adb858c931c9-builder-dockercfg-jmkbp-pull\") pod \"e36b7db7-fd44-4552-8710-adb858c931c9\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.929181 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e36b7db7-fd44-4552-8710-adb858c931c9" (UID: "e36b7db7-fd44-4552-8710-adb858c931c9"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.929278 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-container-storage-root\") pod \"e36b7db7-fd44-4552-8710-adb858c931c9\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.929285 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e36b7db7-fd44-4552-8710-adb858c931c9" (UID: "e36b7db7-fd44-4552-8710-adb858c931c9"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.929320 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e36b7db7-fd44-4552-8710-adb858c931c9-buildcachedir\") pod \"e36b7db7-fd44-4552-8710-adb858c931c9\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.929357 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/e36b7db7-fd44-4552-8710-adb858c931c9-builder-dockercfg-jmkbp-push\") pod \"e36b7db7-fd44-4552-8710-adb858c931c9\" (UID: \"e36b7db7-fd44-4552-8710-adb858c931c9\") " Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.929741 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.929757 4874 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.929767 4874 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e36b7db7-fd44-4552-8710-adb858c931c9-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.929776 4874 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.929785 4874 reconciler_common.go:293] "Volume detached for 
volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.929793 4874 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e36b7db7-fd44-4552-8710-adb858c931c9-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.931121 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36b7db7-fd44-4552-8710-adb858c931c9-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e36b7db7-fd44-4552-8710-adb858c931c9" (UID: "e36b7db7-fd44-4552-8710-adb858c931c9"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.951584 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36b7db7-fd44-4552-8710-adb858c931c9-builder-dockercfg-jmkbp-pull" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-pull") pod "e36b7db7-fd44-4552-8710-adb858c931c9" (UID: "e36b7db7-fd44-4552-8710-adb858c931c9"). InnerVolumeSpecName "builder-dockercfg-jmkbp-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.951640 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36b7db7-fd44-4552-8710-adb858c931c9-builder-dockercfg-jmkbp-push" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-push") pod "e36b7db7-fd44-4552-8710-adb858c931c9" (UID: "e36b7db7-fd44-4552-8710-adb858c931c9"). InnerVolumeSpecName "builder-dockercfg-jmkbp-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:57:27 crc kubenswrapper[4874]: I0122 11:57:27.966613 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e36b7db7-fd44-4552-8710-adb858c931c9-kube-api-access-2pz6p" (OuterVolumeSpecName: "kube-api-access-2pz6p") pod "e36b7db7-fd44-4552-8710-adb858c931c9" (UID: "e36b7db7-fd44-4552-8710-adb858c931c9"). InnerVolumeSpecName "kube-api-access-2pz6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:57:28 crc kubenswrapper[4874]: I0122 11:57:28.031260 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pz6p\" (UniqueName: \"kubernetes.io/projected/e36b7db7-fd44-4552-8710-adb858c931c9-kube-api-access-2pz6p\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:28 crc kubenswrapper[4874]: I0122 11:57:28.031295 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/e36b7db7-fd44-4552-8710-adb858c931c9-builder-dockercfg-jmkbp-pull\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:28 crc kubenswrapper[4874]: I0122 11:57:28.031309 4874 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e36b7db7-fd44-4552-8710-adb858c931c9-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:28 crc kubenswrapper[4874]: I0122 11:57:28.031321 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/e36b7db7-fd44-4552-8710-adb858c931c9-builder-dockercfg-jmkbp-push\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:28 crc kubenswrapper[4874]: I0122 11:57:28.182646 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e36b7db7-fd44-4552-8710-adb858c931c9" (UID: 
"e36b7db7-fd44-4552-8710-adb858c931c9"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:57:28 crc kubenswrapper[4874]: I0122 11:57:28.232982 4874 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:28 crc kubenswrapper[4874]: I0122 11:57:28.457521 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"e36b7db7-fd44-4552-8710-adb858c931c9","Type":"ContainerDied","Data":"a455ec88ef39edfe840d2acca84b9a3be2dc686080708adaf4c9e9cbdfea499a"} Jan 22 11:57:28 crc kubenswrapper[4874]: I0122 11:57:28.457563 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a455ec88ef39edfe840d2acca84b9a3be2dc686080708adaf4c9e9cbdfea499a" Jan 22 11:57:28 crc kubenswrapper[4874]: I0122 11:57:28.457624 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 11:57:30 crc kubenswrapper[4874]: I0122 11:57:30.421952 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e36b7db7-fd44-4552-8710-adb858c931c9" (UID: "e36b7db7-fd44-4552-8710-adb858c931c9"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:57:30 crc kubenswrapper[4874]: I0122 11:57:30.477110 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e36b7db7-fd44-4552-8710-adb858c931c9-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.179941 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 11:57:33 crc kubenswrapper[4874]: E0122 11:57:33.180489 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36b7db7-fd44-4552-8710-adb858c931c9" containerName="manage-dockerfile" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.180505 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36b7db7-fd44-4552-8710-adb858c931c9" containerName="manage-dockerfile" Jan 22 11:57:33 crc kubenswrapper[4874]: E0122 11:57:33.180529 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36b7db7-fd44-4552-8710-adb858c931c9" containerName="git-clone" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.180536 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36b7db7-fd44-4552-8710-adb858c931c9" containerName="git-clone" Jan 22 11:57:33 crc kubenswrapper[4874]: E0122 11:57:33.180549 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36b7db7-fd44-4552-8710-adb858c931c9" containerName="docker-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.180557 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36b7db7-fd44-4552-8710-adb858c931c9" containerName="docker-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.180672 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="e36b7db7-fd44-4552-8710-adb858c931c9" containerName="docker-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.181386 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.184263 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.184270 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.185563 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.190551 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jmkbp" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.206284 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.213800 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fe878840-b584-47fb-aca1-bdbdf8f90b43-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.213826 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.213864 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-proxy-ca-bundles\") 
pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.213895 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s26q\" (UniqueName: \"kubernetes.io/projected/fe878840-b584-47fb-aca1-bdbdf8f90b43-kube-api-access-9s26q\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.213959 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fe878840-b584-47fb-aca1-bdbdf8f90b43-buildcachedir\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.213975 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.213993 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/fe878840-b584-47fb-aca1-bdbdf8f90b43-builder-dockercfg-jmkbp-push\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.214040 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: 
\"kubernetes.io/secret/fe878840-b584-47fb-aca1-bdbdf8f90b43-builder-dockercfg-jmkbp-pull\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.214062 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-container-storage-root\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.214085 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-container-storage-run\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.214115 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-buildworkdir\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.214138 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-system-configs\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.315510 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.315631 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s26q\" (UniqueName: \"kubernetes.io/projected/fe878840-b584-47fb-aca1-bdbdf8f90b43-kube-api-access-9s26q\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.315704 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fe878840-b584-47fb-aca1-bdbdf8f90b43-buildcachedir\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.315756 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.315792 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fe878840-b584-47fb-aca1-bdbdf8f90b43-buildcachedir\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.315801 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/fe878840-b584-47fb-aca1-bdbdf8f90b43-builder-dockercfg-jmkbp-push\") pod \"sg-core-1-build\" 
(UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.315976 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/fe878840-b584-47fb-aca1-bdbdf8f90b43-builder-dockercfg-jmkbp-pull\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.316655 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-container-storage-root\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.316721 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-container-storage-run\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.316749 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-buildworkdir\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.316794 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-system-configs\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " 
pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.316863 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fe878840-b584-47fb-aca1-bdbdf8f90b43-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.316878 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.317058 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.317143 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-container-storage-run\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.317068 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fe878840-b584-47fb-aca1-bdbdf8f90b43-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.317212 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-buildworkdir\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.317227 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.317357 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.317572 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-system-configs\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.317649 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-container-storage-root\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.322624 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: 
\"kubernetes.io/secret/fe878840-b584-47fb-aca1-bdbdf8f90b43-builder-dockercfg-jmkbp-push\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.322909 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/fe878840-b584-47fb-aca1-bdbdf8f90b43-builder-dockercfg-jmkbp-pull\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.343392 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s26q\" (UniqueName: \"kubernetes.io/projected/fe878840-b584-47fb-aca1-bdbdf8f90b43-kube-api-access-9s26q\") pod \"sg-core-1-build\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.502082 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 22 11:57:33 crc kubenswrapper[4874]: I0122 11:57:33.901908 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 11:57:34 crc kubenswrapper[4874]: I0122 11:57:34.505032 4874 generic.go:334] "Generic (PLEG): container finished" podID="fe878840-b584-47fb-aca1-bdbdf8f90b43" containerID="89564f4dbf9a2d19321b838663656b209f8663a0ed711a85fb6ba3eae0a0748e" exitCode=0 Jan 22 11:57:34 crc kubenswrapper[4874]: I0122 11:57:34.505118 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"fe878840-b584-47fb-aca1-bdbdf8f90b43","Type":"ContainerDied","Data":"89564f4dbf9a2d19321b838663656b209f8663a0ed711a85fb6ba3eae0a0748e"} Jan 22 11:57:34 crc kubenswrapper[4874]: I0122 11:57:34.505184 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"fe878840-b584-47fb-aca1-bdbdf8f90b43","Type":"ContainerStarted","Data":"24fd5a5e2c73d17a2c7b7ce83a3fc6b3dcc6015ed1e2a7fbe4eb45a26d0d8694"} Jan 22 11:57:35 crc kubenswrapper[4874]: I0122 11:57:35.513603 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"fe878840-b584-47fb-aca1-bdbdf8f90b43","Type":"ContainerStarted","Data":"f11f8a3f17d73f5c135bc5c8ab5d78fb1685b97c2cb1d7e7bfa5d902cafa8c60"} Jan 22 11:57:35 crc kubenswrapper[4874]: I0122 11:57:35.536015 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=2.535995184 podStartE2EDuration="2.535995184s" podCreationTimestamp="2026-01-22 11:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:57:35.532056113 +0000 UTC m=+1029.377127203" watchObservedRunningTime="2026-01-22 11:57:35.535995184 +0000 UTC m=+1029.381066254" Jan 22 11:57:43 crc 
kubenswrapper[4874]: I0122 11:57:43.605834 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 11:57:43 crc kubenswrapper[4874]: I0122 11:57:43.606535 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="fe878840-b584-47fb-aca1-bdbdf8f90b43" containerName="docker-build" containerID="cri-o://f11f8a3f17d73f5c135bc5c8ab5d78fb1685b97c2cb1d7e7bfa5d902cafa8c60" gracePeriod=30 Jan 22 11:57:43 crc kubenswrapper[4874]: I0122 11:57:43.999668 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_fe878840-b584-47fb-aca1-bdbdf8f90b43/docker-build/0.log" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.000996 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.013584 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-proxy-ca-bundles\") pod \"fe878840-b584-47fb-aca1-bdbdf8f90b43\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.013636 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-system-configs\") pod \"fe878840-b584-47fb-aca1-bdbdf8f90b43\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.013683 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/fe878840-b584-47fb-aca1-bdbdf8f90b43-builder-dockercfg-jmkbp-pull\") pod \"fe878840-b584-47fb-aca1-bdbdf8f90b43\" (UID: 
\"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.013723 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-container-storage-root\") pod \"fe878840-b584-47fb-aca1-bdbdf8f90b43\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.013754 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s26q\" (UniqueName: \"kubernetes.io/projected/fe878840-b584-47fb-aca1-bdbdf8f90b43-kube-api-access-9s26q\") pod \"fe878840-b584-47fb-aca1-bdbdf8f90b43\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.013801 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fe878840-b584-47fb-aca1-bdbdf8f90b43-node-pullsecrets\") pod \"fe878840-b584-47fb-aca1-bdbdf8f90b43\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.013822 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/fe878840-b584-47fb-aca1-bdbdf8f90b43-builder-dockercfg-jmkbp-push\") pod \"fe878840-b584-47fb-aca1-bdbdf8f90b43\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.013853 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fe878840-b584-47fb-aca1-bdbdf8f90b43-buildcachedir\") pod \"fe878840-b584-47fb-aca1-bdbdf8f90b43\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.013881 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-blob-cache\") pod \"fe878840-b584-47fb-aca1-bdbdf8f90b43\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.013936 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-ca-bundles\") pod \"fe878840-b584-47fb-aca1-bdbdf8f90b43\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.013964 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-container-storage-run\") pod \"fe878840-b584-47fb-aca1-bdbdf8f90b43\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.013988 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-buildworkdir\") pod \"fe878840-b584-47fb-aca1-bdbdf8f90b43\" (UID: \"fe878840-b584-47fb-aca1-bdbdf8f90b43\") " Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.014225 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe878840-b584-47fb-aca1-bdbdf8f90b43-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "fe878840-b584-47fb-aca1-bdbdf8f90b43" (UID: "fe878840-b584-47fb-aca1-bdbdf8f90b43"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.015173 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "fe878840-b584-47fb-aca1-bdbdf8f90b43" (UID: "fe878840-b584-47fb-aca1-bdbdf8f90b43"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.015275 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "fe878840-b584-47fb-aca1-bdbdf8f90b43" (UID: "fe878840-b584-47fb-aca1-bdbdf8f90b43"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.015379 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "fe878840-b584-47fb-aca1-bdbdf8f90b43" (UID: "fe878840-b584-47fb-aca1-bdbdf8f90b43"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.015380 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "fe878840-b584-47fb-aca1-bdbdf8f90b43" (UID: "fe878840-b584-47fb-aca1-bdbdf8f90b43"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.015453 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe878840-b584-47fb-aca1-bdbdf8f90b43-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "fe878840-b584-47fb-aca1-bdbdf8f90b43" (UID: "fe878840-b584-47fb-aca1-bdbdf8f90b43"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.016264 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "fe878840-b584-47fb-aca1-bdbdf8f90b43" (UID: "fe878840-b584-47fb-aca1-bdbdf8f90b43"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.021290 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe878840-b584-47fb-aca1-bdbdf8f90b43-builder-dockercfg-jmkbp-pull" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-pull") pod "fe878840-b584-47fb-aca1-bdbdf8f90b43" (UID: "fe878840-b584-47fb-aca1-bdbdf8f90b43"). InnerVolumeSpecName "builder-dockercfg-jmkbp-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.028861 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe878840-b584-47fb-aca1-bdbdf8f90b43-builder-dockercfg-jmkbp-push" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-push") pod "fe878840-b584-47fb-aca1-bdbdf8f90b43" (UID: "fe878840-b584-47fb-aca1-bdbdf8f90b43"). InnerVolumeSpecName "builder-dockercfg-jmkbp-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.028955 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe878840-b584-47fb-aca1-bdbdf8f90b43-kube-api-access-9s26q" (OuterVolumeSpecName: "kube-api-access-9s26q") pod "fe878840-b584-47fb-aca1-bdbdf8f90b43" (UID: "fe878840-b584-47fb-aca1-bdbdf8f90b43"). InnerVolumeSpecName "kube-api-access-9s26q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.115334 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/fe878840-b584-47fb-aca1-bdbdf8f90b43-builder-dockercfg-jmkbp-push\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.115377 4874 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fe878840-b584-47fb-aca1-bdbdf8f90b43-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.115389 4874 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/fe878840-b584-47fb-aca1-bdbdf8f90b43-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.115416 4874 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.115429 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.115440 4874 reconciler_common.go:293] 
"Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.115451 4874 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.115463 4874 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.115474 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/fe878840-b584-47fb-aca1-bdbdf8f90b43-builder-dockercfg-jmkbp-pull\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.115485 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s26q\" (UniqueName: \"kubernetes.io/projected/fe878840-b584-47fb-aca1-bdbdf8f90b43-kube-api-access-9s26q\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.144996 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "fe878840-b584-47fb-aca1-bdbdf8f90b43" (UID: "fe878840-b584-47fb-aca1-bdbdf8f90b43"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.216584 4874 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.223632 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "fe878840-b584-47fb-aca1-bdbdf8f90b43" (UID: "fe878840-b584-47fb-aca1-bdbdf8f90b43"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.317885 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/fe878840-b584-47fb-aca1-bdbdf8f90b43-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.578190 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_fe878840-b584-47fb-aca1-bdbdf8f90b43/docker-build/0.log" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.578780 4874 generic.go:334] "Generic (PLEG): container finished" podID="fe878840-b584-47fb-aca1-bdbdf8f90b43" containerID="f11f8a3f17d73f5c135bc5c8ab5d78fb1685b97c2cb1d7e7bfa5d902cafa8c60" exitCode=1 Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.578837 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"fe878840-b584-47fb-aca1-bdbdf8f90b43","Type":"ContainerDied","Data":"f11f8a3f17d73f5c135bc5c8ab5d78fb1685b97c2cb1d7e7bfa5d902cafa8c60"} Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.578881 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/sg-core-1-build" event={"ID":"fe878840-b584-47fb-aca1-bdbdf8f90b43","Type":"ContainerDied","Data":"24fd5a5e2c73d17a2c7b7ce83a3fc6b3dcc6015ed1e2a7fbe4eb45a26d0d8694"} Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.578920 4874 scope.go:117] "RemoveContainer" containerID="f11f8a3f17d73f5c135bc5c8ab5d78fb1685b97c2cb1d7e7bfa5d902cafa8c60" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.579158 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.627522 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.632826 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.645104 4874 scope.go:117] "RemoveContainer" containerID="89564f4dbf9a2d19321b838663656b209f8663a0ed711a85fb6ba3eae0a0748e" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.667297 4874 scope.go:117] "RemoveContainer" containerID="f11f8a3f17d73f5c135bc5c8ab5d78fb1685b97c2cb1d7e7bfa5d902cafa8c60" Jan 22 11:57:44 crc kubenswrapper[4874]: E0122 11:57:44.667706 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f11f8a3f17d73f5c135bc5c8ab5d78fb1685b97c2cb1d7e7bfa5d902cafa8c60\": container with ID starting with f11f8a3f17d73f5c135bc5c8ab5d78fb1685b97c2cb1d7e7bfa5d902cafa8c60 not found: ID does not exist" containerID="f11f8a3f17d73f5c135bc5c8ab5d78fb1685b97c2cb1d7e7bfa5d902cafa8c60" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.667753 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f11f8a3f17d73f5c135bc5c8ab5d78fb1685b97c2cb1d7e7bfa5d902cafa8c60"} err="failed to get container status 
\"f11f8a3f17d73f5c135bc5c8ab5d78fb1685b97c2cb1d7e7bfa5d902cafa8c60\": rpc error: code = NotFound desc = could not find container \"f11f8a3f17d73f5c135bc5c8ab5d78fb1685b97c2cb1d7e7bfa5d902cafa8c60\": container with ID starting with f11f8a3f17d73f5c135bc5c8ab5d78fb1685b97c2cb1d7e7bfa5d902cafa8c60 not found: ID does not exist" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.667780 4874 scope.go:117] "RemoveContainer" containerID="89564f4dbf9a2d19321b838663656b209f8663a0ed711a85fb6ba3eae0a0748e" Jan 22 11:57:44 crc kubenswrapper[4874]: E0122 11:57:44.668163 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89564f4dbf9a2d19321b838663656b209f8663a0ed711a85fb6ba3eae0a0748e\": container with ID starting with 89564f4dbf9a2d19321b838663656b209f8663a0ed711a85fb6ba3eae0a0748e not found: ID does not exist" containerID="89564f4dbf9a2d19321b838663656b209f8663a0ed711a85fb6ba3eae0a0748e" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.668226 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89564f4dbf9a2d19321b838663656b209f8663a0ed711a85fb6ba3eae0a0748e"} err="failed to get container status \"89564f4dbf9a2d19321b838663656b209f8663a0ed711a85fb6ba3eae0a0748e\": rpc error: code = NotFound desc = could not find container \"89564f4dbf9a2d19321b838663656b209f8663a0ed711a85fb6ba3eae0a0748e\": container with ID starting with 89564f4dbf9a2d19321b838663656b209f8663a0ed711a85fb6ba3eae0a0748e not found: ID does not exist" Jan 22 11:57:44 crc kubenswrapper[4874]: I0122 11:57:44.726219 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe878840-b584-47fb-aca1-bdbdf8f90b43" path="/var/lib/kubelet/pods/fe878840-b584-47fb-aca1-bdbdf8f90b43/volumes" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.711745 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 22 11:57:45 crc 
kubenswrapper[4874]: E0122 11:57:45.712247 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe878840-b584-47fb-aca1-bdbdf8f90b43" containerName="manage-dockerfile" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.712260 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe878840-b584-47fb-aca1-bdbdf8f90b43" containerName="manage-dockerfile" Jan 22 11:57:45 crc kubenswrapper[4874]: E0122 11:57:45.712283 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe878840-b584-47fb-aca1-bdbdf8f90b43" containerName="docker-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.712289 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe878840-b584-47fb-aca1-bdbdf8f90b43" containerName="docker-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.712387 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe878840-b584-47fb-aca1-bdbdf8f90b43" containerName="docker-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.713297 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.715349 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jmkbp" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.715622 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.715964 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.716418 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.743098 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d9c696c3-8c17-45a3-93d2-75801fa0bff4-buildcachedir\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.743175 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-container-storage-run\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.743218 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d9c696c3-8c17-45a3-93d2-75801fa0bff4-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 
11:57:45.743275 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/d9c696c3-8c17-45a3-93d2-75801fa0bff4-builder-dockercfg-jmkbp-pull\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.743322 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nptvv\" (UniqueName: \"kubernetes.io/projected/d9c696c3-8c17-45a3-93d2-75801fa0bff4-kube-api-access-nptvv\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.743387 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.743490 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-system-configs\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.743520 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc 
kubenswrapper[4874]: I0122 11:57:45.743560 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-buildworkdir\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.743713 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/d9c696c3-8c17-45a3-93d2-75801fa0bff4-builder-dockercfg-jmkbp-push\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.743921 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-container-storage-root\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.743958 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.745911 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845075 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-buildworkdir\") pod 
\"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845155 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/d9c696c3-8c17-45a3-93d2-75801fa0bff4-builder-dockercfg-jmkbp-push\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845227 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-container-storage-root\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845250 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845276 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d9c696c3-8c17-45a3-93d2-75801fa0bff4-buildcachedir\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845306 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-container-storage-run\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " 
pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845332 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d9c696c3-8c17-45a3-93d2-75801fa0bff4-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845357 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/d9c696c3-8c17-45a3-93d2-75801fa0bff4-builder-dockercfg-jmkbp-pull\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845386 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nptvv\" (UniqueName: \"kubernetes.io/projected/d9c696c3-8c17-45a3-93d2-75801fa0bff4-kube-api-access-nptvv\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845498 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d9c696c3-8c17-45a3-93d2-75801fa0bff4-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845516 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-buildworkdir\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845513 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d9c696c3-8c17-45a3-93d2-75801fa0bff4-buildcachedir\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845559 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845642 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-system-configs\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845687 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.845996 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-container-storage-root\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.846123 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.846371 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-system-configs\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.846787 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.847074 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-container-storage-run\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.847303 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.851280 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/d9c696c3-8c17-45a3-93d2-75801fa0bff4-builder-dockercfg-jmkbp-pull\") pod \"sg-core-2-build\" (UID: 
\"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.853897 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/d9c696c3-8c17-45a3-93d2-75801fa0bff4-builder-dockercfg-jmkbp-push\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:45 crc kubenswrapper[4874]: I0122 11:57:45.871200 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nptvv\" (UniqueName: \"kubernetes.io/projected/d9c696c3-8c17-45a3-93d2-75801fa0bff4-kube-api-access-nptvv\") pod \"sg-core-2-build\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " pod="service-telemetry/sg-core-2-build" Jan 22 11:57:46 crc kubenswrapper[4874]: I0122 11:57:46.027149 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 22 11:57:46 crc kubenswrapper[4874]: I0122 11:57:46.272261 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 22 11:57:46 crc kubenswrapper[4874]: I0122 11:57:46.594752 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d9c696c3-8c17-45a3-93d2-75801fa0bff4","Type":"ContainerStarted","Data":"597c44687b703277dcd83cef195a6ae697ec9676e1137316d5c3929b8b80bf65"} Jan 22 11:57:46 crc kubenswrapper[4874]: I0122 11:57:46.594799 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d9c696c3-8c17-45a3-93d2-75801fa0bff4","Type":"ContainerStarted","Data":"eef6f96df9cbbdb33314fde51a3910ff53d4c339428b3cdc79ffacacbb25cb6e"} Jan 22 11:57:47 crc kubenswrapper[4874]: I0122 11:57:47.603916 4874 generic.go:334] "Generic (PLEG): container finished" podID="d9c696c3-8c17-45a3-93d2-75801fa0bff4" 
containerID="597c44687b703277dcd83cef195a6ae697ec9676e1137316d5c3929b8b80bf65" exitCode=0 Jan 22 11:57:47 crc kubenswrapper[4874]: I0122 11:57:47.604025 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d9c696c3-8c17-45a3-93d2-75801fa0bff4","Type":"ContainerDied","Data":"597c44687b703277dcd83cef195a6ae697ec9676e1137316d5c3929b8b80bf65"} Jan 22 11:57:48 crc kubenswrapper[4874]: I0122 11:57:48.611267 4874 generic.go:334] "Generic (PLEG): container finished" podID="d9c696c3-8c17-45a3-93d2-75801fa0bff4" containerID="732e9d64e794a4557ed9a1689a240417d73a781922ccd0e1dfec357d6dc019bd" exitCode=0 Jan 22 11:57:48 crc kubenswrapper[4874]: I0122 11:57:48.611515 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d9c696c3-8c17-45a3-93d2-75801fa0bff4","Type":"ContainerDied","Data":"732e9d64e794a4557ed9a1689a240417d73a781922ccd0e1dfec357d6dc019bd"} Jan 22 11:57:48 crc kubenswrapper[4874]: I0122 11:57:48.639832 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_d9c696c3-8c17-45a3-93d2-75801fa0bff4/manage-dockerfile/0.log" Jan 22 11:57:49 crc kubenswrapper[4874]: I0122 11:57:49.620825 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d9c696c3-8c17-45a3-93d2-75801fa0bff4","Type":"ContainerStarted","Data":"99530f55f4edee2cf39ea2220ebb4c14a93218e16d5a2640d0c5c1ea60a0cb4c"} Jan 22 11:57:49 crc kubenswrapper[4874]: I0122 11:57:49.659291 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=4.6592577429999995 podStartE2EDuration="4.659257743s" podCreationTimestamp="2026-01-22 11:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:57:49.650440464 +0000 UTC m=+1043.495511604" 
watchObservedRunningTime="2026-01-22 11:57:49.659257743 +0000 UTC m=+1043.504328853" Jan 22 11:59:13 crc kubenswrapper[4874]: I0122 11:59:13.520476 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:59:13 crc kubenswrapper[4874]: I0122 11:59:13.521083 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:59:43 crc kubenswrapper[4874]: I0122 11:59:43.520607 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:59:43 crc kubenswrapper[4874]: I0122 11:59:43.521011 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.148496 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2"] Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.150511 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.155718 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.156007 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.169756 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2"] Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.311546 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b40805b8-a40b-4854-80b3-28916f4f8a43-secret-volume\") pod \"collect-profiles-29484720-8vrb2\" (UID: \"b40805b8-a40b-4854-80b3-28916f4f8a43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.311651 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4svdz\" (UniqueName: \"kubernetes.io/projected/b40805b8-a40b-4854-80b3-28916f4f8a43-kube-api-access-4svdz\") pod \"collect-profiles-29484720-8vrb2\" (UID: \"b40805b8-a40b-4854-80b3-28916f4f8a43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.311710 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b40805b8-a40b-4854-80b3-28916f4f8a43-config-volume\") pod \"collect-profiles-29484720-8vrb2\" (UID: \"b40805b8-a40b-4854-80b3-28916f4f8a43\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.413181 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4svdz\" (UniqueName: \"kubernetes.io/projected/b40805b8-a40b-4854-80b3-28916f4f8a43-kube-api-access-4svdz\") pod \"collect-profiles-29484720-8vrb2\" (UID: \"b40805b8-a40b-4854-80b3-28916f4f8a43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.413316 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b40805b8-a40b-4854-80b3-28916f4f8a43-config-volume\") pod \"collect-profiles-29484720-8vrb2\" (UID: \"b40805b8-a40b-4854-80b3-28916f4f8a43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.413527 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b40805b8-a40b-4854-80b3-28916f4f8a43-secret-volume\") pod \"collect-profiles-29484720-8vrb2\" (UID: \"b40805b8-a40b-4854-80b3-28916f4f8a43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.417000 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b40805b8-a40b-4854-80b3-28916f4f8a43-config-volume\") pod \"collect-profiles-29484720-8vrb2\" (UID: \"b40805b8-a40b-4854-80b3-28916f4f8a43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.431942 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b40805b8-a40b-4854-80b3-28916f4f8a43-secret-volume\") pod \"collect-profiles-29484720-8vrb2\" (UID: \"b40805b8-a40b-4854-80b3-28916f4f8a43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.443534 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4svdz\" (UniqueName: \"kubernetes.io/projected/b40805b8-a40b-4854-80b3-28916f4f8a43-kube-api-access-4svdz\") pod \"collect-profiles-29484720-8vrb2\" (UID: \"b40805b8-a40b-4854-80b3-28916f4f8a43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.483000 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" Jan 22 12:00:00 crc kubenswrapper[4874]: I0122 12:00:00.724271 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2"] Jan 22 12:00:01 crc kubenswrapper[4874]: I0122 12:00:01.587553 4874 generic.go:334] "Generic (PLEG): container finished" podID="b40805b8-a40b-4854-80b3-28916f4f8a43" containerID="8a25148d438b262cdaed2dfa6ed49fd79b1992854289d94b46eaf3e9b7e53219" exitCode=0 Jan 22 12:00:01 crc kubenswrapper[4874]: I0122 12:00:01.587664 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" event={"ID":"b40805b8-a40b-4854-80b3-28916f4f8a43","Type":"ContainerDied","Data":"8a25148d438b262cdaed2dfa6ed49fd79b1992854289d94b46eaf3e9b7e53219"} Jan 22 12:00:01 crc kubenswrapper[4874]: I0122 12:00:01.587853 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" 
event={"ID":"b40805b8-a40b-4854-80b3-28916f4f8a43","Type":"ContainerStarted","Data":"68e5718356b4f1c5306fdfdec998346424b75624bf6b9b2e43a9ae045ce98780"} Jan 22 12:00:02 crc kubenswrapper[4874]: I0122 12:00:02.927546 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" Jan 22 12:00:03 crc kubenswrapper[4874]: I0122 12:00:03.046528 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b40805b8-a40b-4854-80b3-28916f4f8a43-secret-volume\") pod \"b40805b8-a40b-4854-80b3-28916f4f8a43\" (UID: \"b40805b8-a40b-4854-80b3-28916f4f8a43\") " Jan 22 12:00:03 crc kubenswrapper[4874]: I0122 12:00:03.046590 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4svdz\" (UniqueName: \"kubernetes.io/projected/b40805b8-a40b-4854-80b3-28916f4f8a43-kube-api-access-4svdz\") pod \"b40805b8-a40b-4854-80b3-28916f4f8a43\" (UID: \"b40805b8-a40b-4854-80b3-28916f4f8a43\") " Jan 22 12:00:03 crc kubenswrapper[4874]: I0122 12:00:03.046616 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b40805b8-a40b-4854-80b3-28916f4f8a43-config-volume\") pod \"b40805b8-a40b-4854-80b3-28916f4f8a43\" (UID: \"b40805b8-a40b-4854-80b3-28916f4f8a43\") " Jan 22 12:00:03 crc kubenswrapper[4874]: I0122 12:00:03.047746 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b40805b8-a40b-4854-80b3-28916f4f8a43-config-volume" (OuterVolumeSpecName: "config-volume") pod "b40805b8-a40b-4854-80b3-28916f4f8a43" (UID: "b40805b8-a40b-4854-80b3-28916f4f8a43"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:00:03 crc kubenswrapper[4874]: I0122 12:00:03.053285 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40805b8-a40b-4854-80b3-28916f4f8a43-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b40805b8-a40b-4854-80b3-28916f4f8a43" (UID: "b40805b8-a40b-4854-80b3-28916f4f8a43"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:00:03 crc kubenswrapper[4874]: I0122 12:00:03.068647 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b40805b8-a40b-4854-80b3-28916f4f8a43-kube-api-access-4svdz" (OuterVolumeSpecName: "kube-api-access-4svdz") pod "b40805b8-a40b-4854-80b3-28916f4f8a43" (UID: "b40805b8-a40b-4854-80b3-28916f4f8a43"). InnerVolumeSpecName "kube-api-access-4svdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:00:03 crc kubenswrapper[4874]: I0122 12:00:03.147879 4874 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b40805b8-a40b-4854-80b3-28916f4f8a43-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:03 crc kubenswrapper[4874]: I0122 12:00:03.148248 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b40805b8-a40b-4854-80b3-28916f4f8a43-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:03 crc kubenswrapper[4874]: I0122 12:00:03.148264 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4svdz\" (UniqueName: \"kubernetes.io/projected/b40805b8-a40b-4854-80b3-28916f4f8a43-kube-api-access-4svdz\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:03 crc kubenswrapper[4874]: I0122 12:00:03.601488 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" 
event={"ID":"b40805b8-a40b-4854-80b3-28916f4f8a43","Type":"ContainerDied","Data":"68e5718356b4f1c5306fdfdec998346424b75624bf6b9b2e43a9ae045ce98780"} Jan 22 12:00:03 crc kubenswrapper[4874]: I0122 12:00:03.601530 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68e5718356b4f1c5306fdfdec998346424b75624bf6b9b2e43a9ae045ce98780" Jan 22 12:00:03 crc kubenswrapper[4874]: I0122 12:00:03.601555 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2" Jan 22 12:00:13 crc kubenswrapper[4874]: I0122 12:00:13.520788 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:00:13 crc kubenswrapper[4874]: I0122 12:00:13.521424 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:00:13 crc kubenswrapper[4874]: I0122 12:00:13.521495 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 12:00:13 crc kubenswrapper[4874]: I0122 12:00:13.522194 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8527e378a489c65991df9b62c1d10fce3a020be8e3c0c8d8d62b128fb0466805"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:00:13 crc 
kubenswrapper[4874]: I0122 12:00:13.522281 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://8527e378a489c65991df9b62c1d10fce3a020be8e3c0c8d8d62b128fb0466805" gracePeriod=600 Jan 22 12:00:14 crc kubenswrapper[4874]: I0122 12:00:14.696168 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="8527e378a489c65991df9b62c1d10fce3a020be8e3c0c8d8d62b128fb0466805" exitCode=0 Jan 22 12:00:14 crc kubenswrapper[4874]: I0122 12:00:14.696355 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"8527e378a489c65991df9b62c1d10fce3a020be8e3c0c8d8d62b128fb0466805"} Jan 22 12:00:14 crc kubenswrapper[4874]: I0122 12:00:14.696595 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"75a7548055039130cae1c6ba4be8efe015a8bd6db752337a976c1be606300781"} Jan 22 12:00:14 crc kubenswrapper[4874]: I0122 12:00:14.696630 4874 scope.go:117] "RemoveContainer" containerID="2554a3567b106d7ded370e89a12bf13dafb3f02d930bcb0aa478a1f4bf2cf32b" Jan 22 12:00:54 crc kubenswrapper[4874]: I0122 12:00:54.128758 4874 patch_prober.go:28] interesting pod/console-operator-58897d9998-mftbv container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 12:00:54 crc kubenswrapper[4874]: I0122 12:00:54.132703 4874 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console-operator/console-operator-58897d9998-mftbv" podUID="8a67b9a8-ad8a-40e3-955c-53aed07a9140" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 12:01:24 crc kubenswrapper[4874]: I0122 12:01:24.204781 4874 generic.go:334] "Generic (PLEG): container finished" podID="d9c696c3-8c17-45a3-93d2-75801fa0bff4" containerID="99530f55f4edee2cf39ea2220ebb4c14a93218e16d5a2640d0c5c1ea60a0cb4c" exitCode=0 Jan 22 12:01:24 crc kubenswrapper[4874]: I0122 12:01:24.204896 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d9c696c3-8c17-45a3-93d2-75801fa0bff4","Type":"ContainerDied","Data":"99530f55f4edee2cf39ea2220ebb4c14a93218e16d5a2640d0c5c1ea60a0cb4c"} Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.512694 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600129 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d9c696c3-8c17-45a3-93d2-75801fa0bff4-node-pullsecrets\") pod \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600253 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-blob-cache\") pod \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600256 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c696c3-8c17-45a3-93d2-75801fa0bff4-node-pullsecrets" (OuterVolumeSpecName: 
"node-pullsecrets") pod "d9c696c3-8c17-45a3-93d2-75801fa0bff4" (UID: "d9c696c3-8c17-45a3-93d2-75801fa0bff4"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600300 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-proxy-ca-bundles\") pod \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600339 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/d9c696c3-8c17-45a3-93d2-75801fa0bff4-builder-dockercfg-jmkbp-push\") pod \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600376 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d9c696c3-8c17-45a3-93d2-75801fa0bff4-buildcachedir\") pod \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600450 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-container-storage-root\") pod \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600488 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-system-configs\") pod \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\" (UID: 
\"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600513 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/d9c696c3-8c17-45a3-93d2-75801fa0bff4-builder-dockercfg-jmkbp-pull\") pod \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600542 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-buildworkdir\") pod \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600576 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-container-storage-run\") pod \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600607 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-ca-bundles\") pod \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600644 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nptvv\" (UniqueName: \"kubernetes.io/projected/d9c696c3-8c17-45a3-93d2-75801fa0bff4-kube-api-access-nptvv\") pod \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\" (UID: \"d9c696c3-8c17-45a3-93d2-75801fa0bff4\") " Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.600837 4874 reconciler_common.go:293] "Volume detached for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d9c696c3-8c17-45a3-93d2-75801fa0bff4-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.603216 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c696c3-8c17-45a3-93d2-75801fa0bff4-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d9c696c3-8c17-45a3-93d2-75801fa0bff4" (UID: "d9c696c3-8c17-45a3-93d2-75801fa0bff4"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.603259 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d9c696c3-8c17-45a3-93d2-75801fa0bff4" (UID: "d9c696c3-8c17-45a3-93d2-75801fa0bff4"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.603749 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d9c696c3-8c17-45a3-93d2-75801fa0bff4" (UID: "d9c696c3-8c17-45a3-93d2-75801fa0bff4"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.603888 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d9c696c3-8c17-45a3-93d2-75801fa0bff4" (UID: "d9c696c3-8c17-45a3-93d2-75801fa0bff4"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.604552 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d9c696c3-8c17-45a3-93d2-75801fa0bff4" (UID: "d9c696c3-8c17-45a3-93d2-75801fa0bff4"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.609853 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c696c3-8c17-45a3-93d2-75801fa0bff4-kube-api-access-nptvv" (OuterVolumeSpecName: "kube-api-access-nptvv") pod "d9c696c3-8c17-45a3-93d2-75801fa0bff4" (UID: "d9c696c3-8c17-45a3-93d2-75801fa0bff4"). InnerVolumeSpecName "kube-api-access-nptvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.610733 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c696c3-8c17-45a3-93d2-75801fa0bff4-builder-dockercfg-jmkbp-pull" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-pull") pod "d9c696c3-8c17-45a3-93d2-75801fa0bff4" (UID: "d9c696c3-8c17-45a3-93d2-75801fa0bff4"). InnerVolumeSpecName "builder-dockercfg-jmkbp-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.615201 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c696c3-8c17-45a3-93d2-75801fa0bff4-builder-dockercfg-jmkbp-push" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-push") pod "d9c696c3-8c17-45a3-93d2-75801fa0bff4" (UID: "d9c696c3-8c17-45a3-93d2-75801fa0bff4"). InnerVolumeSpecName "builder-dockercfg-jmkbp-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.618360 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d9c696c3-8c17-45a3-93d2-75801fa0bff4" (UID: "d9c696c3-8c17-45a3-93d2-75801fa0bff4"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.701776 4874 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.701811 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/d9c696c3-8c17-45a3-93d2-75801fa0bff4-builder-dockercfg-jmkbp-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.701821 4874 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d9c696c3-8c17-45a3-93d2-75801fa0bff4-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.701829 4874 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.701837 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/d9c696c3-8c17-45a3-93d2-75801fa0bff4-builder-dockercfg-jmkbp-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.701846 4874 reconciler_common.go:293] 
"Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.701854 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.701865 4874 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.701873 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nptvv\" (UniqueName: \"kubernetes.io/projected/d9c696c3-8c17-45a3-93d2-75801fa0bff4-kube-api-access-nptvv\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:25 crc kubenswrapper[4874]: I0122 12:01:25.967585 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d9c696c3-8c17-45a3-93d2-75801fa0bff4" (UID: "d9c696c3-8c17-45a3-93d2-75801fa0bff4"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:01:26 crc kubenswrapper[4874]: I0122 12:01:26.005238 4874 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:26 crc kubenswrapper[4874]: I0122 12:01:26.224987 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"d9c696c3-8c17-45a3-93d2-75801fa0bff4","Type":"ContainerDied","Data":"eef6f96df9cbbdb33314fde51a3910ff53d4c339428b3cdc79ffacacbb25cb6e"} Jan 22 12:01:26 crc kubenswrapper[4874]: I0122 12:01:26.225045 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef6f96df9cbbdb33314fde51a3910ff53d4c339428b3cdc79ffacacbb25cb6e" Jan 22 12:01:26 crc kubenswrapper[4874]: I0122 12:01:26.225142 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 22 12:01:28 crc kubenswrapper[4874]: I0122 12:01:28.891028 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d9c696c3-8c17-45a3-93d2-75801fa0bff4" (UID: "d9c696c3-8c17-45a3-93d2-75801fa0bff4"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:01:28 crc kubenswrapper[4874]: I0122 12:01:28.957127 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d9c696c3-8c17-45a3-93d2-75801fa0bff4-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.307372 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:01:30 crc kubenswrapper[4874]: E0122 12:01:30.308350 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c696c3-8c17-45a3-93d2-75801fa0bff4" containerName="manage-dockerfile" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.308383 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c696c3-8c17-45a3-93d2-75801fa0bff4" containerName="manage-dockerfile" Jan 22 12:01:30 crc kubenswrapper[4874]: E0122 12:01:30.308451 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40805b8-a40b-4854-80b3-28916f4f8a43" containerName="collect-profiles" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.308469 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40805b8-a40b-4854-80b3-28916f4f8a43" containerName="collect-profiles" Jan 22 12:01:30 crc kubenswrapper[4874]: E0122 12:01:30.308503 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c696c3-8c17-45a3-93d2-75801fa0bff4" containerName="docker-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.308520 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c696c3-8c17-45a3-93d2-75801fa0bff4" containerName="docker-build" Jan 22 12:01:30 crc kubenswrapper[4874]: E0122 12:01:30.308549 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c696c3-8c17-45a3-93d2-75801fa0bff4" containerName="git-clone" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.308565 4874 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d9c696c3-8c17-45a3-93d2-75801fa0bff4" containerName="git-clone" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.308800 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40805b8-a40b-4854-80b3-28916f4f8a43" containerName="collect-profiles" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.308848 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c696c3-8c17-45a3-93d2-75801fa0bff4" containerName="docker-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.310244 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.312947 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.315239 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.315537 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jmkbp" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.315611 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.340360 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.377208 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 
12:01:30.377290 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.377378 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/800f107b-2838-4231-b18a-b79d55fb8462-builder-dockercfg-jmkbp-pull\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.377537 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.377644 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.377692 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/800f107b-2838-4231-b18a-b79d55fb8462-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 
12:01:30.377758 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr6mf\" (UniqueName: \"kubernetes.io/projected/800f107b-2838-4231-b18a-b79d55fb8462-kube-api-access-mr6mf\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.377799 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.377831 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/800f107b-2838-4231-b18a-b79d55fb8462-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.377865 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/800f107b-2838-4231-b18a-b79d55fb8462-builder-dockercfg-jmkbp-push\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.377901 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 
22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.377931 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.479070 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/800f107b-2838-4231-b18a-b79d55fb8462-builder-dockercfg-jmkbp-pull\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.479159 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.479218 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.479252 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/800f107b-2838-4231-b18a-b79d55fb8462-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.479291 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr6mf\" (UniqueName: \"kubernetes.io/projected/800f107b-2838-4231-b18a-b79d55fb8462-kube-api-access-mr6mf\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.479326 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.479369 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/800f107b-2838-4231-b18a-b79d55fb8462-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.479462 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/800f107b-2838-4231-b18a-b79d55fb8462-builder-dockercfg-jmkbp-push\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.479512 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.479547 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.479613 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.479683 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.479972 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/800f107b-2838-4231-b18a-b79d55fb8462-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.480179 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.480302 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/800f107b-2838-4231-b18a-b79d55fb8462-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.480582 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.481116 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.481123 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.481928 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.481943 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-system-configs\") pod \"sg-bridge-1-build\" 
(UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.481989 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.486697 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/800f107b-2838-4231-b18a-b79d55fb8462-builder-dockercfg-jmkbp-pull\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.488360 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/800f107b-2838-4231-b18a-b79d55fb8462-builder-dockercfg-jmkbp-push\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.514675 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr6mf\" (UniqueName: \"kubernetes.io/projected/800f107b-2838-4231-b18a-b79d55fb8462-kube-api-access-mr6mf\") pod \"sg-bridge-1-build\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.640745 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:30 crc kubenswrapper[4874]: I0122 12:01:30.943761 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:01:31 crc kubenswrapper[4874]: I0122 12:01:31.271919 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"800f107b-2838-4231-b18a-b79d55fb8462","Type":"ContainerStarted","Data":"a99c233f6fa3271be7db376911a5bd86bbfc74b03e87481eec91c179785d49bb"} Jan 22 12:01:32 crc kubenswrapper[4874]: I0122 12:01:32.290588 4874 generic.go:334] "Generic (PLEG): container finished" podID="800f107b-2838-4231-b18a-b79d55fb8462" containerID="0de26c01cefe3d14fbdb3b1a34fd21d6c52587de7238901147f563699b679d87" exitCode=0 Jan 22 12:01:32 crc kubenswrapper[4874]: I0122 12:01:32.290656 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"800f107b-2838-4231-b18a-b79d55fb8462","Type":"ContainerDied","Data":"0de26c01cefe3d14fbdb3b1a34fd21d6c52587de7238901147f563699b679d87"} Jan 22 12:01:33 crc kubenswrapper[4874]: I0122 12:01:33.300223 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"800f107b-2838-4231-b18a-b79d55fb8462","Type":"ContainerStarted","Data":"bc11c6aeda8d6c6bd381ba9d5974f75e7324fcfb45d919a0312312b2bb9e4228"} Jan 22 12:01:33 crc kubenswrapper[4874]: I0122 12:01:33.336973 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.336954468 podStartE2EDuration="3.336954468s" podCreationTimestamp="2026-01-22 12:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:01:33.33243415 +0000 UTC m=+1267.177505240" watchObservedRunningTime="2026-01-22 12:01:33.336954468 +0000 UTC m=+1267.182025548" Jan 22 12:01:40 
crc kubenswrapper[4874]: I0122 12:01:40.616543 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:01:40 crc kubenswrapper[4874]: I0122 12:01:40.617119 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="800f107b-2838-4231-b18a-b79d55fb8462" containerName="docker-build" containerID="cri-o://bc11c6aeda8d6c6bd381ba9d5974f75e7324fcfb45d919a0312312b2bb9e4228" gracePeriod=30 Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.366852 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_800f107b-2838-4231-b18a-b79d55fb8462/docker-build/0.log" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.367781 4874 generic.go:334] "Generic (PLEG): container finished" podID="800f107b-2838-4231-b18a-b79d55fb8462" containerID="bc11c6aeda8d6c6bd381ba9d5974f75e7324fcfb45d919a0312312b2bb9e4228" exitCode=1 Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.367813 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"800f107b-2838-4231-b18a-b79d55fb8462","Type":"ContainerDied","Data":"bc11c6aeda8d6c6bd381ba9d5974f75e7324fcfb45d919a0312312b2bb9e4228"} Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.568386 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_800f107b-2838-4231-b18a-b79d55fb8462/docker-build/0.log" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.569057 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.745863 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/800f107b-2838-4231-b18a-b79d55fb8462-builder-dockercfg-jmkbp-push\") pod \"800f107b-2838-4231-b18a-b79d55fb8462\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.745953 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr6mf\" (UniqueName: \"kubernetes.io/projected/800f107b-2838-4231-b18a-b79d55fb8462-kube-api-access-mr6mf\") pod \"800f107b-2838-4231-b18a-b79d55fb8462\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.746034 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-ca-bundles\") pod \"800f107b-2838-4231-b18a-b79d55fb8462\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.746077 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/800f107b-2838-4231-b18a-b79d55fb8462-buildcachedir\") pod \"800f107b-2838-4231-b18a-b79d55fb8462\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.746138 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-proxy-ca-bundles\") pod \"800f107b-2838-4231-b18a-b79d55fb8462\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.746180 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-system-configs\") pod \"800f107b-2838-4231-b18a-b79d55fb8462\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.746226 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/800f107b-2838-4231-b18a-b79d55fb8462-builder-dockercfg-jmkbp-pull\") pod \"800f107b-2838-4231-b18a-b79d55fb8462\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.746217 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800f107b-2838-4231-b18a-b79d55fb8462-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "800f107b-2838-4231-b18a-b79d55fb8462" (UID: "800f107b-2838-4231-b18a-b79d55fb8462"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.746319 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-build-blob-cache\") pod \"800f107b-2838-4231-b18a-b79d55fb8462\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.746702 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/800f107b-2838-4231-b18a-b79d55fb8462-node-pullsecrets\") pod \"800f107b-2838-4231-b18a-b79d55fb8462\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.746752 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-buildworkdir\") pod \"800f107b-2838-4231-b18a-b79d55fb8462\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.746789 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-container-storage-run\") pod \"800f107b-2838-4231-b18a-b79d55fb8462\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.746813 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800f107b-2838-4231-b18a-b79d55fb8462-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "800f107b-2838-4231-b18a-b79d55fb8462" (UID: "800f107b-2838-4231-b18a-b79d55fb8462"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.746826 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-container-storage-root\") pod \"800f107b-2838-4231-b18a-b79d55fb8462\" (UID: \"800f107b-2838-4231-b18a-b79d55fb8462\") " Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.747470 4874 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/800f107b-2838-4231-b18a-b79d55fb8462-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.747495 4874 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/800f107b-2838-4231-b18a-b79d55fb8462-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.747543 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "800f107b-2838-4231-b18a-b79d55fb8462" (UID: "800f107b-2838-4231-b18a-b79d55fb8462"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.747589 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "800f107b-2838-4231-b18a-b79d55fb8462" (UID: "800f107b-2838-4231-b18a-b79d55fb8462"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.748352 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "800f107b-2838-4231-b18a-b79d55fb8462" (UID: "800f107b-2838-4231-b18a-b79d55fb8462"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.748950 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "800f107b-2838-4231-b18a-b79d55fb8462" (UID: "800f107b-2838-4231-b18a-b79d55fb8462"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.749959 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "800f107b-2838-4231-b18a-b79d55fb8462" (UID: "800f107b-2838-4231-b18a-b79d55fb8462"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.769960 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/800f107b-2838-4231-b18a-b79d55fb8462-builder-dockercfg-jmkbp-pull" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-pull") pod "800f107b-2838-4231-b18a-b79d55fb8462" (UID: "800f107b-2838-4231-b18a-b79d55fb8462"). InnerVolumeSpecName "builder-dockercfg-jmkbp-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.770555 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800f107b-2838-4231-b18a-b79d55fb8462-kube-api-access-mr6mf" (OuterVolumeSpecName: "kube-api-access-mr6mf") pod "800f107b-2838-4231-b18a-b79d55fb8462" (UID: "800f107b-2838-4231-b18a-b79d55fb8462"). InnerVolumeSpecName "kube-api-access-mr6mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.771540 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/800f107b-2838-4231-b18a-b79d55fb8462-builder-dockercfg-jmkbp-push" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-push") pod "800f107b-2838-4231-b18a-b79d55fb8462" (UID: "800f107b-2838-4231-b18a-b79d55fb8462"). InnerVolumeSpecName "builder-dockercfg-jmkbp-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.845346 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "800f107b-2838-4231-b18a-b79d55fb8462" (UID: "800f107b-2838-4231-b18a-b79d55fb8462"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.848309 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/800f107b-2838-4231-b18a-b79d55fb8462-builder-dockercfg-jmkbp-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.848374 4874 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.848390 4874 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.848422 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.848439 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/800f107b-2838-4231-b18a-b79d55fb8462-builder-dockercfg-jmkbp-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.848454 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr6mf\" (UniqueName: \"kubernetes.io/projected/800f107b-2838-4231-b18a-b79d55fb8462-kube-api-access-mr6mf\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.848465 4874 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.848475 4874 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:41 crc kubenswrapper[4874]: I0122 12:01:41.848488 4874 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/800f107b-2838-4231-b18a-b79d55fb8462-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.149530 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "800f107b-2838-4231-b18a-b79d55fb8462" (UID: "800f107b-2838-4231-b18a-b79d55fb8462"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.152829 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/800f107b-2838-4231-b18a-b79d55fb8462-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.264100 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 22 12:01:42 crc kubenswrapper[4874]: E0122 12:01:42.264463 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800f107b-2838-4231-b18a-b79d55fb8462" containerName="manage-dockerfile" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.264485 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="800f107b-2838-4231-b18a-b79d55fb8462" containerName="manage-dockerfile" Jan 22 12:01:42 crc kubenswrapper[4874]: E0122 12:01:42.264501 4874 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="800f107b-2838-4231-b18a-b79d55fb8462" containerName="docker-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.264510 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="800f107b-2838-4231-b18a-b79d55fb8462" containerName="docker-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.264681 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="800f107b-2838-4231-b18a-b79d55fb8462" containerName="docker-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.265815 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.268281 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.268527 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.268979 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.278144 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.373421 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_800f107b-2838-4231-b18a-b79d55fb8462/docker-build/0.log" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.374039 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"800f107b-2838-4231-b18a-b79d55fb8462","Type":"ContainerDied","Data":"a99c233f6fa3271be7db376911a5bd86bbfc74b03e87481eec91c179785d49bb"} Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.374097 4874 scope.go:117] "RemoveContainer" 
containerID="bc11c6aeda8d6c6bd381ba9d5974f75e7324fcfb45d919a0312312b2bb9e4228" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.374132 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.433906 4874 scope.go:117] "RemoveContainer" containerID="0de26c01cefe3d14fbdb3b1a34fd21d6c52587de7238901147f563699b679d87" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.438899 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.447949 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.456373 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/c087bdb2-7a13-44f4-a3e8-d023752790b1-builder-dockercfg-jmkbp-push\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.456426 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.456451 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " 
pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.456468 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.456486 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/c087bdb2-7a13-44f4-a3e8-d023752790b1-builder-dockercfg-jmkbp-pull\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.456501 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.456575 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c087bdb2-7a13-44f4-a3e8-d023752790b1-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.456591 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: 
\"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.456609 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.456631 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.456651 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c087bdb2-7a13-44f4-a3e8-d023752790b1-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.456751 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hfjz\" (UniqueName: \"kubernetes.io/projected/c087bdb2-7a13-44f4-a3e8-d023752790b1-kube-api-access-4hfjz\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.557822 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c087bdb2-7a13-44f4-a3e8-d023752790b1-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: 
\"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.557908 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.557947 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.557996 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.558033 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c087bdb2-7a13-44f4-a3e8-d023752790b1-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.558022 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c087bdb2-7a13-44f4-a3e8-d023752790b1-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc 
kubenswrapper[4874]: I0122 12:01:42.558079 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hfjz\" (UniqueName: \"kubernetes.io/projected/c087bdb2-7a13-44f4-a3e8-d023752790b1-kube-api-access-4hfjz\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.558162 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/c087bdb2-7a13-44f4-a3e8-d023752790b1-builder-dockercfg-jmkbp-push\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.558209 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.558252 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.558283 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.558311 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/c087bdb2-7a13-44f4-a3e8-d023752790b1-builder-dockercfg-jmkbp-pull\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.558346 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.558808 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.558989 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.559500 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.559896 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c087bdb2-7a13-44f4-a3e8-d023752790b1-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.560097 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.560242 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.560495 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.561794 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.563599 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: 
\"kubernetes.io/secret/c087bdb2-7a13-44f4-a3e8-d023752790b1-builder-dockercfg-jmkbp-push\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.567074 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/c087bdb2-7a13-44f4-a3e8-d023752790b1-builder-dockercfg-jmkbp-pull\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.590687 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hfjz\" (UniqueName: \"kubernetes.io/projected/c087bdb2-7a13-44f4-a3e8-d023752790b1-kube-api-access-4hfjz\") pod \"sg-bridge-2-build\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.727210 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="800f107b-2838-4231-b18a-b79d55fb8462" path="/var/lib/kubelet/pods/800f107b-2838-4231-b18a-b79d55fb8462/volumes" Jan 22 12:01:42 crc kubenswrapper[4874]: I0122 12:01:42.885440 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 22 12:01:43 crc kubenswrapper[4874]: I0122 12:01:43.153636 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 22 12:01:43 crc kubenswrapper[4874]: I0122 12:01:43.382959 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c087bdb2-7a13-44f4-a3e8-d023752790b1","Type":"ContainerStarted","Data":"03c145c6ca77e91abe6da3eee50916d0faf5143179dbc4bb6305f384059014cb"} Jan 22 12:01:44 crc kubenswrapper[4874]: I0122 12:01:44.390431 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c087bdb2-7a13-44f4-a3e8-d023752790b1","Type":"ContainerStarted","Data":"bec974edd7c85b0ec7b6a166376c0bec44230e25fd3b1650be8945a58ee8234b"} Jan 22 12:01:45 crc kubenswrapper[4874]: I0122 12:01:45.400174 4874 generic.go:334] "Generic (PLEG): container finished" podID="c087bdb2-7a13-44f4-a3e8-d023752790b1" containerID="bec974edd7c85b0ec7b6a166376c0bec44230e25fd3b1650be8945a58ee8234b" exitCode=0 Jan 22 12:01:45 crc kubenswrapper[4874]: I0122 12:01:45.400240 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c087bdb2-7a13-44f4-a3e8-d023752790b1","Type":"ContainerDied","Data":"bec974edd7c85b0ec7b6a166376c0bec44230e25fd3b1650be8945a58ee8234b"} Jan 22 12:01:46 crc kubenswrapper[4874]: I0122 12:01:46.413169 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c087bdb2-7a13-44f4-a3e8-d023752790b1","Type":"ContainerDied","Data":"23590652507fa8f13559f30aef24b304be84ab519306c12e229704bd157a03c9"} Jan 22 12:01:46 crc kubenswrapper[4874]: I0122 12:01:46.412995 4874 generic.go:334] "Generic (PLEG): container finished" podID="c087bdb2-7a13-44f4-a3e8-d023752790b1" containerID="23590652507fa8f13559f30aef24b304be84ab519306c12e229704bd157a03c9" exitCode=0 Jan 22 12:01:46 
crc kubenswrapper[4874]: I0122 12:01:46.475504 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_c087bdb2-7a13-44f4-a3e8-d023752790b1/manage-dockerfile/0.log" Jan 22 12:01:47 crc kubenswrapper[4874]: I0122 12:01:47.426080 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c087bdb2-7a13-44f4-a3e8-d023752790b1","Type":"ContainerStarted","Data":"03fce667683d464b8333babfad9386c271cabdec4aa5c2556892d60d9307683d"} Jan 22 12:01:47 crc kubenswrapper[4874]: I0122 12:01:47.475937 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.475909836 podStartE2EDuration="5.475909836s" podCreationTimestamp="2026-01-22 12:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:01:47.467122266 +0000 UTC m=+1281.312193396" watchObservedRunningTime="2026-01-22 12:01:47.475909836 +0000 UTC m=+1281.320980936" Jan 22 12:02:39 crc kubenswrapper[4874]: I0122 12:02:39.855596 4874 generic.go:334] "Generic (PLEG): container finished" podID="c087bdb2-7a13-44f4-a3e8-d023752790b1" containerID="03fce667683d464b8333babfad9386c271cabdec4aa5c2556892d60d9307683d" exitCode=0 Jan 22 12:02:39 crc kubenswrapper[4874]: I0122 12:02:39.856265 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c087bdb2-7a13-44f4-a3e8-d023752790b1","Type":"ContainerDied","Data":"03fce667683d464b8333babfad9386c271cabdec4aa5c2556892d60d9307683d"} Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.098330 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.177459 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-container-storage-root\") pod \"c087bdb2-7a13-44f4-a3e8-d023752790b1\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.177539 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/c087bdb2-7a13-44f4-a3e8-d023752790b1-builder-dockercfg-jmkbp-pull\") pod \"c087bdb2-7a13-44f4-a3e8-d023752790b1\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.177585 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c087bdb2-7a13-44f4-a3e8-d023752790b1-node-pullsecrets\") pod \"c087bdb2-7a13-44f4-a3e8-d023752790b1\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.177623 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-ca-bundles\") pod \"c087bdb2-7a13-44f4-a3e8-d023752790b1\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.177666 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c087bdb2-7a13-44f4-a3e8-d023752790b1-buildcachedir\") pod \"c087bdb2-7a13-44f4-a3e8-d023752790b1\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.177700 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-buildworkdir\") pod \"c087bdb2-7a13-44f4-a3e8-d023752790b1\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.177728 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-blob-cache\") pod \"c087bdb2-7a13-44f4-a3e8-d023752790b1\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.177762 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/c087bdb2-7a13-44f4-a3e8-d023752790b1-builder-dockercfg-jmkbp-push\") pod \"c087bdb2-7a13-44f4-a3e8-d023752790b1\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.177793 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hfjz\" (UniqueName: \"kubernetes.io/projected/c087bdb2-7a13-44f4-a3e8-d023752790b1-kube-api-access-4hfjz\") pod \"c087bdb2-7a13-44f4-a3e8-d023752790b1\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.177829 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-proxy-ca-bundles\") pod \"c087bdb2-7a13-44f4-a3e8-d023752790b1\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.177864 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-container-storage-run\") pod \"c087bdb2-7a13-44f4-a3e8-d023752790b1\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.177917 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-system-configs\") pod \"c087bdb2-7a13-44f4-a3e8-d023752790b1\" (UID: \"c087bdb2-7a13-44f4-a3e8-d023752790b1\") " Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.178600 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c087bdb2-7a13-44f4-a3e8-d023752790b1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c087bdb2-7a13-44f4-a3e8-d023752790b1" (UID: "c087bdb2-7a13-44f4-a3e8-d023752790b1"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.178704 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c087bdb2-7a13-44f4-a3e8-d023752790b1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c087bdb2-7a13-44f4-a3e8-d023752790b1" (UID: "c087bdb2-7a13-44f4-a3e8-d023752790b1"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.179557 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c087bdb2-7a13-44f4-a3e8-d023752790b1" (UID: "c087bdb2-7a13-44f4-a3e8-d023752790b1"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.179727 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c087bdb2-7a13-44f4-a3e8-d023752790b1" (UID: "c087bdb2-7a13-44f4-a3e8-d023752790b1"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.180975 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c087bdb2-7a13-44f4-a3e8-d023752790b1" (UID: "c087bdb2-7a13-44f4-a3e8-d023752790b1"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.181640 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c087bdb2-7a13-44f4-a3e8-d023752790b1" (UID: "c087bdb2-7a13-44f4-a3e8-d023752790b1"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.183933 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c087bdb2-7a13-44f4-a3e8-d023752790b1" (UID: "c087bdb2-7a13-44f4-a3e8-d023752790b1"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.184091 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c087bdb2-7a13-44f4-a3e8-d023752790b1-builder-dockercfg-jmkbp-push" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-push") pod "c087bdb2-7a13-44f4-a3e8-d023752790b1" (UID: "c087bdb2-7a13-44f4-a3e8-d023752790b1"). InnerVolumeSpecName "builder-dockercfg-jmkbp-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.184149 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c087bdb2-7a13-44f4-a3e8-d023752790b1-builder-dockercfg-jmkbp-pull" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-pull") pod "c087bdb2-7a13-44f4-a3e8-d023752790b1" (UID: "c087bdb2-7a13-44f4-a3e8-d023752790b1"). InnerVolumeSpecName "builder-dockercfg-jmkbp-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.184206 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c087bdb2-7a13-44f4-a3e8-d023752790b1-kube-api-access-4hfjz" (OuterVolumeSpecName: "kube-api-access-4hfjz") pod "c087bdb2-7a13-44f4-a3e8-d023752790b1" (UID: "c087bdb2-7a13-44f4-a3e8-d023752790b1"). InnerVolumeSpecName "kube-api-access-4hfjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.278836 4874 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.278880 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/c087bdb2-7a13-44f4-a3e8-d023752790b1-builder-dockercfg-jmkbp-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.278897 4874 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c087bdb2-7a13-44f4-a3e8-d023752790b1-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.278908 4874 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.278919 4874 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c087bdb2-7a13-44f4-a3e8-d023752790b1-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.278932 4874 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.278943 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/c087bdb2-7a13-44f4-a3e8-d023752790b1-builder-dockercfg-jmkbp-push\") on node \"crc\" DevicePath \"\"" Jan 
22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.278953 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hfjz\" (UniqueName: \"kubernetes.io/projected/c087bdb2-7a13-44f4-a3e8-d023752790b1-kube-api-access-4hfjz\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.278968 4874 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.278979 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.287920 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c087bdb2-7a13-44f4-a3e8-d023752790b1" (UID: "c087bdb2-7a13-44f4-a3e8-d023752790b1"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.380956 4874 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.877446 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c087bdb2-7a13-44f4-a3e8-d023752790b1","Type":"ContainerDied","Data":"03c145c6ca77e91abe6da3eee50916d0faf5143179dbc4bb6305f384059014cb"} Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.877556 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03c145c6ca77e91abe6da3eee50916d0faf5143179dbc4bb6305f384059014cb" Jan 22 12:02:41 crc kubenswrapper[4874]: I0122 12:02:41.877583 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 22 12:02:42 crc kubenswrapper[4874]: I0122 12:02:42.043008 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c087bdb2-7a13-44f4-a3e8-d023752790b1" (UID: "c087bdb2-7a13-44f4-a3e8-d023752790b1"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:02:42 crc kubenswrapper[4874]: I0122 12:02:42.091753 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c087bdb2-7a13-44f4-a3e8-d023752790b1-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:43 crc kubenswrapper[4874]: I0122 12:02:43.520040 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:02:43 crc kubenswrapper[4874]: I0122 12:02:43.520124 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.510006 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:02:45 crc kubenswrapper[4874]: E0122 12:02:45.510549 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c087bdb2-7a13-44f4-a3e8-d023752790b1" containerName="manage-dockerfile" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.510561 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c087bdb2-7a13-44f4-a3e8-d023752790b1" containerName="manage-dockerfile" Jan 22 12:02:45 crc kubenswrapper[4874]: E0122 12:02:45.510572 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c087bdb2-7a13-44f4-a3e8-d023752790b1" containerName="git-clone" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.510578 4874 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c087bdb2-7a13-44f4-a3e8-d023752790b1" containerName="git-clone" Jan 22 12:02:45 crc kubenswrapper[4874]: E0122 12:02:45.510593 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c087bdb2-7a13-44f4-a3e8-d023752790b1" containerName="docker-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.510599 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c087bdb2-7a13-44f4-a3e8-d023752790b1" containerName="docker-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.510692 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="c087bdb2-7a13-44f4-a3e8-d023752790b1" containerName="docker-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.511251 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.519842 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.520487 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.520779 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jmkbp" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.521608 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.537599 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.642472 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/d7593be4-689b-4c31-a19d-e92e0532ff7a-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.642512 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.642530 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.642553 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.642582 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/d7593be4-689b-4c31-a19d-e92e0532ff7a-builder-dockercfg-jmkbp-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc 
kubenswrapper[4874]: I0122 12:02:45.642605 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/d7593be4-689b-4c31-a19d-e92e0532ff7a-builder-dockercfg-jmkbp-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.642626 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.642660 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.642678 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.642698 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rndv4\" (UniqueName: 
\"kubernetes.io/projected/d7593be4-689b-4c31-a19d-e92e0532ff7a-kube-api-access-rndv4\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.642716 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.642736 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d7593be4-689b-4c31-a19d-e92e0532ff7a-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.743746 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.743816 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.743867 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rndv4\" (UniqueName: \"kubernetes.io/projected/d7593be4-689b-4c31-a19d-e92e0532ff7a-kube-api-access-rndv4\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.743903 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.743945 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d7593be4-689b-4c31-a19d-e92e0532ff7a-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.744049 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d7593be4-689b-4c31-a19d-e92e0532ff7a-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.744084 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.744119 
4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.744158 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.744210 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/d7593be4-689b-4c31-a19d-e92e0532ff7a-builder-dockercfg-jmkbp-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.744253 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/d7593be4-689b-4c31-a19d-e92e0532ff7a-builder-dockercfg-jmkbp-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.744298 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.744366 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.744444 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.744481 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d7593be4-689b-4c31-a19d-e92e0532ff7a-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.744534 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.744241 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d7593be4-689b-4c31-a19d-e92e0532ff7a-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") 
" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.744746 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.745076 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.745147 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.745189 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.753136 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/d7593be4-689b-4c31-a19d-e92e0532ff7a-builder-dockercfg-jmkbp-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.753143 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/d7593be4-689b-4c31-a19d-e92e0532ff7a-builder-dockercfg-jmkbp-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.763466 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rndv4\" (UniqueName: \"kubernetes.io/projected/d7593be4-689b-4c31-a19d-e92e0532ff7a-kube-api-access-rndv4\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:45 crc kubenswrapper[4874]: I0122 12:02:45.825896 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:46 crc kubenswrapper[4874]: I0122 12:02:46.346390 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:02:46 crc kubenswrapper[4874]: I0122 12:02:46.936204 4874 generic.go:334] "Generic (PLEG): container finished" podID="d7593be4-689b-4c31-a19d-e92e0532ff7a" containerID="cc01ca7e6056bf37c999ae4b7cc25e8543e1c445265cf7eaf584235b78c8febf" exitCode=0 Jan 22 12:02:46 crc kubenswrapper[4874]: I0122 12:02:46.936287 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"d7593be4-689b-4c31-a19d-e92e0532ff7a","Type":"ContainerDied","Data":"cc01ca7e6056bf37c999ae4b7cc25e8543e1c445265cf7eaf584235b78c8febf"} Jan 22 12:02:46 crc kubenswrapper[4874]: I0122 12:02:46.936378 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"d7593be4-689b-4c31-a19d-e92e0532ff7a","Type":"ContainerStarted","Data":"881208a8d8e40a48481311cf4b043654d5ed9b3946807ac3ddc58f75310baeea"} Jan 22 12:02:47 crc kubenswrapper[4874]: I0122 12:02:47.945290 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"d7593be4-689b-4c31-a19d-e92e0532ff7a","Type":"ContainerStarted","Data":"4838f8e664a6c743d5586c810fc107f1bf65bc6fac610bf0b92e38da2d9d3872"} Jan 22 12:02:47 crc kubenswrapper[4874]: I0122 12:02:47.991379 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=2.991346356 podStartE2EDuration="2.991346356s" podCreationTimestamp="2026-01-22 12:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:02:47.981209105 +0000 UTC m=+1341.826280225" 
watchObservedRunningTime="2026-01-22 12:02:47.991346356 +0000 UTC m=+1341.836417466" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.261843 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.262633 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="d7593be4-689b-4c31-a19d-e92e0532ff7a" containerName="docker-build" containerID="cri-o://4838f8e664a6c743d5586c810fc107f1bf65bc6fac610bf0b92e38da2d9d3872" gracePeriod=30 Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.675127 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_d7593be4-689b-4c31-a19d-e92e0532ff7a/docker-build/0.log" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.676204 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.800296 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d7593be4-689b-4c31-a19d-e92e0532ff7a-buildcachedir\") pod \"d7593be4-689b-4c31-a19d-e92e0532ff7a\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.800378 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-container-storage-root\") pod \"d7593be4-689b-4c31-a19d-e92e0532ff7a\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.800464 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-proxy-ca-bundles\") pod \"d7593be4-689b-4c31-a19d-e92e0532ff7a\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.800510 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-ca-bundles\") pod \"d7593be4-689b-4c31-a19d-e92e0532ff7a\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.800521 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7593be4-689b-4c31-a19d-e92e0532ff7a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d7593be4-689b-4c31-a19d-e92e0532ff7a" (UID: "d7593be4-689b-4c31-a19d-e92e0532ff7a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.800542 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/d7593be4-689b-4c31-a19d-e92e0532ff7a-builder-dockercfg-jmkbp-push\") pod \"d7593be4-689b-4c31-a19d-e92e0532ff7a\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.800708 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-blob-cache\") pod \"d7593be4-689b-4c31-a19d-e92e0532ff7a\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.800759 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-buildworkdir\") pod 
\"d7593be4-689b-4c31-a19d-e92e0532ff7a\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.800809 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rndv4\" (UniqueName: \"kubernetes.io/projected/d7593be4-689b-4c31-a19d-e92e0532ff7a-kube-api-access-rndv4\") pod \"d7593be4-689b-4c31-a19d-e92e0532ff7a\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.800843 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-container-storage-run\") pod \"d7593be4-689b-4c31-a19d-e92e0532ff7a\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.800897 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/d7593be4-689b-4c31-a19d-e92e0532ff7a-builder-dockercfg-jmkbp-pull\") pod \"d7593be4-689b-4c31-a19d-e92e0532ff7a\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.800975 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-system-configs\") pod \"d7593be4-689b-4c31-a19d-e92e0532ff7a\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.801012 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d7593be4-689b-4c31-a19d-e92e0532ff7a-node-pullsecrets\") pod \"d7593be4-689b-4c31-a19d-e92e0532ff7a\" (UID: \"d7593be4-689b-4c31-a19d-e92e0532ff7a\") " Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.801485 4874 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7593be4-689b-4c31-a19d-e92e0532ff7a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d7593be4-689b-4c31-a19d-e92e0532ff7a" (UID: "d7593be4-689b-4c31-a19d-e92e0532ff7a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.801829 4874 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d7593be4-689b-4c31-a19d-e92e0532ff7a-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.803860 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d7593be4-689b-4c31-a19d-e92e0532ff7a" (UID: "d7593be4-689b-4c31-a19d-e92e0532ff7a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.804234 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d7593be4-689b-4c31-a19d-e92e0532ff7a" (UID: "d7593be4-689b-4c31-a19d-e92e0532ff7a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.804272 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d7593be4-689b-4c31-a19d-e92e0532ff7a" (UID: "d7593be4-689b-4c31-a19d-e92e0532ff7a"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.804260 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d7593be4-689b-4c31-a19d-e92e0532ff7a" (UID: "d7593be4-689b-4c31-a19d-e92e0532ff7a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.805020 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d7593be4-689b-4c31-a19d-e92e0532ff7a" (UID: "d7593be4-689b-4c31-a19d-e92e0532ff7a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.810826 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7593be4-689b-4c31-a19d-e92e0532ff7a-kube-api-access-rndv4" (OuterVolumeSpecName: "kube-api-access-rndv4") pod "d7593be4-689b-4c31-a19d-e92e0532ff7a" (UID: "d7593be4-689b-4c31-a19d-e92e0532ff7a"). InnerVolumeSpecName "kube-api-access-rndv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.813077 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7593be4-689b-4c31-a19d-e92e0532ff7a-builder-dockercfg-jmkbp-pull" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-pull") pod "d7593be4-689b-4c31-a19d-e92e0532ff7a" (UID: "d7593be4-689b-4c31-a19d-e92e0532ff7a"). InnerVolumeSpecName "builder-dockercfg-jmkbp-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.818670 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7593be4-689b-4c31-a19d-e92e0532ff7a-builder-dockercfg-jmkbp-push" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-push") pod "d7593be4-689b-4c31-a19d-e92e0532ff7a" (UID: "d7593be4-689b-4c31-a19d-e92e0532ff7a"). InnerVolumeSpecName "builder-dockercfg-jmkbp-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.878005 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d7593be4-689b-4c31-a19d-e92e0532ff7a" (UID: "d7593be4-689b-4c31-a19d-e92e0532ff7a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.903367 4874 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.903605 4874 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.903615 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rndv4\" (UniqueName: \"kubernetes.io/projected/d7593be4-689b-4c31-a19d-e92e0532ff7a-kube-api-access-rndv4\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.903625 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.903633 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/d7593be4-689b-4c31-a19d-e92e0532ff7a-builder-dockercfg-jmkbp-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.903642 4874 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.903651 4874 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d7593be4-689b-4c31-a19d-e92e0532ff7a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.903659 4874 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.903670 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/d7593be4-689b-4c31-a19d-e92e0532ff7a-builder-dockercfg-jmkbp-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:56 crc kubenswrapper[4874]: I0122 12:02:56.903677 4874 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7593be4-689b-4c31-a19d-e92e0532ff7a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.005337 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_d7593be4-689b-4c31-a19d-e92e0532ff7a/docker-build/0.log" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.005884 4874 generic.go:334] "Generic (PLEG): container finished" podID="d7593be4-689b-4c31-a19d-e92e0532ff7a" containerID="4838f8e664a6c743d5586c810fc107f1bf65bc6fac610bf0b92e38da2d9d3872" exitCode=1 Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.005924 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"d7593be4-689b-4c31-a19d-e92e0532ff7a","Type":"ContainerDied","Data":"4838f8e664a6c743d5586c810fc107f1bf65bc6fac610bf0b92e38da2d9d3872"} Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.005946 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.005954 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"d7593be4-689b-4c31-a19d-e92e0532ff7a","Type":"ContainerDied","Data":"881208a8d8e40a48481311cf4b043654d5ed9b3946807ac3ddc58f75310baeea"} Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.005974 4874 scope.go:117] "RemoveContainer" containerID="4838f8e664a6c743d5586c810fc107f1bf65bc6fac610bf0b92e38da2d9d3872" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.037308 4874 scope.go:117] "RemoveContainer" containerID="cc01ca7e6056bf37c999ae4b7cc25e8543e1c445265cf7eaf584235b78c8febf" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.066087 4874 scope.go:117] "RemoveContainer" containerID="4838f8e664a6c743d5586c810fc107f1bf65bc6fac610bf0b92e38da2d9d3872" Jan 22 12:02:57 crc kubenswrapper[4874]: E0122 12:02:57.066618 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4838f8e664a6c743d5586c810fc107f1bf65bc6fac610bf0b92e38da2d9d3872\": container with ID starting with 4838f8e664a6c743d5586c810fc107f1bf65bc6fac610bf0b92e38da2d9d3872 not found: ID does not exist" containerID="4838f8e664a6c743d5586c810fc107f1bf65bc6fac610bf0b92e38da2d9d3872" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.066672 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4838f8e664a6c743d5586c810fc107f1bf65bc6fac610bf0b92e38da2d9d3872"} err="failed to get container status \"4838f8e664a6c743d5586c810fc107f1bf65bc6fac610bf0b92e38da2d9d3872\": rpc error: code = NotFound desc = could not find container \"4838f8e664a6c743d5586c810fc107f1bf65bc6fac610bf0b92e38da2d9d3872\": container with ID starting with 4838f8e664a6c743d5586c810fc107f1bf65bc6fac610bf0b92e38da2d9d3872 not found: ID does not exist" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.066705 4874 scope.go:117] "RemoveContainer" containerID="cc01ca7e6056bf37c999ae4b7cc25e8543e1c445265cf7eaf584235b78c8febf" Jan 22 12:02:57 crc kubenswrapper[4874]: E0122 12:02:57.067341 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc01ca7e6056bf37c999ae4b7cc25e8543e1c445265cf7eaf584235b78c8febf\": container with ID starting with cc01ca7e6056bf37c999ae4b7cc25e8543e1c445265cf7eaf584235b78c8febf not found: ID does not exist" containerID="cc01ca7e6056bf37c999ae4b7cc25e8543e1c445265cf7eaf584235b78c8febf" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.067382 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc01ca7e6056bf37c999ae4b7cc25e8543e1c445265cf7eaf584235b78c8febf"} err="failed to get container status \"cc01ca7e6056bf37c999ae4b7cc25e8543e1c445265cf7eaf584235b78c8febf\": rpc error: code = NotFound desc = could not find container \"cc01ca7e6056bf37c999ae4b7cc25e8543e1c445265cf7eaf584235b78c8febf\": container with ID 
starting with cc01ca7e6056bf37c999ae4b7cc25e8543e1c445265cf7eaf584235b78c8febf not found: ID does not exist" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.351376 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d7593be4-689b-4c31-a19d-e92e0532ff7a" (UID: "d7593be4-689b-4c31-a19d-e92e0532ff7a"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.409223 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d7593be4-689b-4c31-a19d-e92e0532ff7a-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.667480 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.675518 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.904016 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 22 12:02:57 crc kubenswrapper[4874]: E0122 12:02:57.904268 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7593be4-689b-4c31-a19d-e92e0532ff7a" containerName="manage-dockerfile" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.904283 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7593be4-689b-4c31-a19d-e92e0532ff7a" containerName="manage-dockerfile" Jan 22 12:02:57 crc kubenswrapper[4874]: E0122 12:02:57.904298 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7593be4-689b-4c31-a19d-e92e0532ff7a" containerName="docker-build" Jan 22 12:02:57 
crc kubenswrapper[4874]: I0122 12:02:57.904307 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7593be4-689b-4c31-a19d-e92e0532ff7a" containerName="docker-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.904455 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7593be4-689b-4c31-a19d-e92e0532ff7a" containerName="docker-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.905413 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.907129 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.907422 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jmkbp" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.907879 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.910719 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.918602 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.918640 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.918661 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.918693 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0930f68b-3161-43da-80c9-440cf31d98b9-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.918727 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.918749 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/0930f68b-3161-43da-80c9-440cf31d98b9-builder-dockercfg-jmkbp-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:57 crc 
kubenswrapper[4874]: I0122 12:02:57.918779 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.918816 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz6zp\" (UniqueName: \"kubernetes.io/projected/0930f68b-3161-43da-80c9-440cf31d98b9-kube-api-access-nz6zp\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.918852 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0930f68b-3161-43da-80c9-440cf31d98b9-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.918872 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.918907 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/0930f68b-3161-43da-80c9-440cf31d98b9-builder-dockercfg-jmkbp-pull\") 
pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.918946 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:57 crc kubenswrapper[4874]: I0122 12:02:57.937178 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.020358 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/0930f68b-3161-43da-80c9-440cf31d98b9-builder-dockercfg-jmkbp-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.020519 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.020565 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 
12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.020600 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.020631 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.020679 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0930f68b-3161-43da-80c9-440cf31d98b9-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.020724 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.020754 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/0930f68b-3161-43da-80c9-440cf31d98b9-builder-dockercfg-jmkbp-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.020832 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.020993 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0930f68b-3161-43da-80c9-440cf31d98b9-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.021268 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.021424 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz6zp\" (UniqueName: \"kubernetes.io/projected/0930f68b-3161-43da-80c9-440cf31d98b9-kube-api-access-nz6zp\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.021494 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0930f68b-3161-43da-80c9-440cf31d98b9-buildcachedir\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.021585 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.021685 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.021750 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0930f68b-3161-43da-80c9-440cf31d98b9-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.021789 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.021888 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.022164 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.022163 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.022622 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.024885 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/0930f68b-3161-43da-80c9-440cf31d98b9-builder-dockercfg-jmkbp-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.027094 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/0930f68b-3161-43da-80c9-440cf31d98b9-builder-dockercfg-jmkbp-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.054663 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz6zp\" (UniqueName: \"kubernetes.io/projected/0930f68b-3161-43da-80c9-440cf31d98b9-kube-api-access-nz6zp\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.222570 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.550546 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 22 12:02:58 crc kubenswrapper[4874]: W0122 12:02:58.557545 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0930f68b_3161_43da_80c9_440cf31d98b9.slice/crio-886d9f0c67a3a4c8d14ceb141fd085df974e37449fd7e5219c23a2101e17a594 WatchSource:0}: Error finding container 886d9f0c67a3a4c8d14ceb141fd085df974e37449fd7e5219c23a2101e17a594: Status 404 returned error can't find the container with id 886d9f0c67a3a4c8d14ceb141fd085df974e37449fd7e5219c23a2101e17a594 Jan 22 12:02:58 crc kubenswrapper[4874]: I0122 12:02:58.724623 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7593be4-689b-4c31-a19d-e92e0532ff7a" path="/var/lib/kubelet/pods/d7593be4-689b-4c31-a19d-e92e0532ff7a/volumes" Jan 22 12:02:59 crc kubenswrapper[4874]: I0122 12:02:59.027267 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0930f68b-3161-43da-80c9-440cf31d98b9","Type":"ContainerStarted","Data":"d5c3e73264960ac0886ae573a444480717c77fc3ce266bc2ffb12054f1170f02"} Jan 22 12:02:59 crc kubenswrapper[4874]: I0122 12:02:59.027333 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0930f68b-3161-43da-80c9-440cf31d98b9","Type":"ContainerStarted","Data":"886d9f0c67a3a4c8d14ceb141fd085df974e37449fd7e5219c23a2101e17a594"} Jan 22 12:03:00 crc kubenswrapper[4874]: I0122 12:03:00.037788 4874 generic.go:334] "Generic (PLEG): container finished" podID="0930f68b-3161-43da-80c9-440cf31d98b9" containerID="d5c3e73264960ac0886ae573a444480717c77fc3ce266bc2ffb12054f1170f02" exitCode=0 Jan 22 12:03:00 crc kubenswrapper[4874]: I0122 12:03:00.037834 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0930f68b-3161-43da-80c9-440cf31d98b9","Type":"ContainerDied","Data":"d5c3e73264960ac0886ae573a444480717c77fc3ce266bc2ffb12054f1170f02"} Jan 22 12:03:01 crc kubenswrapper[4874]: I0122 12:03:01.046527 4874 generic.go:334] "Generic (PLEG): container finished" podID="0930f68b-3161-43da-80c9-440cf31d98b9" containerID="ecca9b0c68b8c5dab8b5cbc8e453ab9979273f14bca7c42e50e718db90057883" exitCode=0 Jan 22 12:03:01 crc kubenswrapper[4874]: I0122 12:03:01.047049 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0930f68b-3161-43da-80c9-440cf31d98b9","Type":"ContainerDied","Data":"ecca9b0c68b8c5dab8b5cbc8e453ab9979273f14bca7c42e50e718db90057883"} Jan 22 12:03:01 crc kubenswrapper[4874]: I0122 12:03:01.099169 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_0930f68b-3161-43da-80c9-440cf31d98b9/manage-dockerfile/0.log" Jan 22 12:03:02 crc kubenswrapper[4874]: I0122 12:03:02.057635 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0930f68b-3161-43da-80c9-440cf31d98b9","Type":"ContainerStarted","Data":"e02bf90168e7d467aa3f3480ae95da68a8df2550b54ab56b09bd571cb9315ecd"} Jan 22 12:03:02 crc kubenswrapper[4874]: I0122 12:03:02.090156 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.090137498 podStartE2EDuration="5.090137498s" podCreationTimestamp="2026-01-22 12:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:03:02.08727273 +0000 UTC m=+1355.932343850" watchObservedRunningTime="2026-01-22 12:03:02.090137498 +0000 UTC m=+1355.935208568" Jan 22 12:03:13 crc kubenswrapper[4874]: I0122 12:03:13.520200 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:03:13 crc kubenswrapper[4874]: I0122 12:03:13.520998 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:03:43 crc kubenswrapper[4874]: I0122 12:03:43.520542 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:03:43 crc kubenswrapper[4874]: I0122 
12:03:43.521116 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:03:43 crc kubenswrapper[4874]: I0122 12:03:43.521159 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 12:03:43 crc kubenswrapper[4874]: I0122 12:03:43.521691 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75a7548055039130cae1c6ba4be8efe015a8bd6db752337a976c1be606300781"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:03:43 crc kubenswrapper[4874]: I0122 12:03:43.521743 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://75a7548055039130cae1c6ba4be8efe015a8bd6db752337a976c1be606300781" gracePeriod=600 Jan 22 12:03:44 crc kubenswrapper[4874]: I0122 12:03:44.363928 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="75a7548055039130cae1c6ba4be8efe015a8bd6db752337a976c1be606300781" exitCode=0 Jan 22 12:03:44 crc kubenswrapper[4874]: I0122 12:03:44.364048 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"75a7548055039130cae1c6ba4be8efe015a8bd6db752337a976c1be606300781"} Jan 22 12:03:44 crc 
kubenswrapper[4874]: I0122 12:03:44.364300 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"} Jan 22 12:03:44 crc kubenswrapper[4874]: I0122 12:03:44.364326 4874 scope.go:117] "RemoveContainer" containerID="8527e378a489c65991df9b62c1d10fce3a020be8e3c0c8d8d62b128fb0466805" Jan 22 12:03:54 crc kubenswrapper[4874]: I0122 12:03:54.436926 4874 generic.go:334] "Generic (PLEG): container finished" podID="0930f68b-3161-43da-80c9-440cf31d98b9" containerID="e02bf90168e7d467aa3f3480ae95da68a8df2550b54ab56b09bd571cb9315ecd" exitCode=0 Jan 22 12:03:54 crc kubenswrapper[4874]: I0122 12:03:54.437068 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0930f68b-3161-43da-80c9-440cf31d98b9","Type":"ContainerDied","Data":"e02bf90168e7d467aa3f3480ae95da68a8df2550b54ab56b09bd571cb9315ecd"} Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.698831 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.844825 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-system-configs\") pod \"0930f68b-3161-43da-80c9-440cf31d98b9\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.844929 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-ca-bundles\") pod \"0930f68b-3161-43da-80c9-440cf31d98b9\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.844959 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/0930f68b-3161-43da-80c9-440cf31d98b9-builder-dockercfg-jmkbp-push\") pod \"0930f68b-3161-43da-80c9-440cf31d98b9\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.845695 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "0930f68b-3161-43da-80c9-440cf31d98b9" (UID: "0930f68b-3161-43da-80c9-440cf31d98b9"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.845767 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "0930f68b-3161-43da-80c9-440cf31d98b9" (UID: "0930f68b-3161-43da-80c9-440cf31d98b9"). 
InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.845890 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0930f68b-3161-43da-80c9-440cf31d98b9-buildcachedir\") pod \"0930f68b-3161-43da-80c9-440cf31d98b9\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.845924 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz6zp\" (UniqueName: \"kubernetes.io/projected/0930f68b-3161-43da-80c9-440cf31d98b9-kube-api-access-nz6zp\") pod \"0930f68b-3161-43da-80c9-440cf31d98b9\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.845970 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/0930f68b-3161-43da-80c9-440cf31d98b9-builder-dockercfg-jmkbp-pull\") pod \"0930f68b-3161-43da-80c9-440cf31d98b9\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.845981 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0930f68b-3161-43da-80c9-440cf31d98b9-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "0930f68b-3161-43da-80c9-440cf31d98b9" (UID: "0930f68b-3161-43da-80c9-440cf31d98b9"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.846128 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0930f68b-3161-43da-80c9-440cf31d98b9-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "0930f68b-3161-43da-80c9-440cf31d98b9" (UID: "0930f68b-3161-43da-80c9-440cf31d98b9"). 
InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.846380 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0930f68b-3161-43da-80c9-440cf31d98b9-node-pullsecrets\") pod \"0930f68b-3161-43da-80c9-440cf31d98b9\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.846480 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-buildworkdir\") pod \"0930f68b-3161-43da-80c9-440cf31d98b9\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.846505 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-container-storage-run\") pod \"0930f68b-3161-43da-80c9-440cf31d98b9\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.846553 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-container-storage-root\") pod \"0930f68b-3161-43da-80c9-440cf31d98b9\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.846583 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-proxy-ca-bundles\") pod \"0930f68b-3161-43da-80c9-440cf31d98b9\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.846607 4874 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-build-blob-cache\") pod \"0930f68b-3161-43da-80c9-440cf31d98b9\" (UID: \"0930f68b-3161-43da-80c9-440cf31d98b9\") " Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.847182 4874 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.847205 4874 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.847219 4874 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0930f68b-3161-43da-80c9-440cf31d98b9-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.847231 4874 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0930f68b-3161-43da-80c9-440cf31d98b9-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.847832 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "0930f68b-3161-43da-80c9-440cf31d98b9" (UID: "0930f68b-3161-43da-80c9-440cf31d98b9"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.847943 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "0930f68b-3161-43da-80c9-440cf31d98b9" (UID: "0930f68b-3161-43da-80c9-440cf31d98b9"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.849759 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "0930f68b-3161-43da-80c9-440cf31d98b9" (UID: "0930f68b-3161-43da-80c9-440cf31d98b9"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.851130 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0930f68b-3161-43da-80c9-440cf31d98b9-builder-dockercfg-jmkbp-push" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-push") pod "0930f68b-3161-43da-80c9-440cf31d98b9" (UID: "0930f68b-3161-43da-80c9-440cf31d98b9"). InnerVolumeSpecName "builder-dockercfg-jmkbp-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.854617 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0930f68b-3161-43da-80c9-440cf31d98b9-builder-dockercfg-jmkbp-pull" (OuterVolumeSpecName: "builder-dockercfg-jmkbp-pull") pod "0930f68b-3161-43da-80c9-440cf31d98b9" (UID: "0930f68b-3161-43da-80c9-440cf31d98b9"). InnerVolumeSpecName "builder-dockercfg-jmkbp-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.868692 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0930f68b-3161-43da-80c9-440cf31d98b9-kube-api-access-nz6zp" (OuterVolumeSpecName: "kube-api-access-nz6zp") pod "0930f68b-3161-43da-80c9-440cf31d98b9" (UID: "0930f68b-3161-43da-80c9-440cf31d98b9"). InnerVolumeSpecName "kube-api-access-nz6zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.948639 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-pull\" (UniqueName: \"kubernetes.io/secret/0930f68b-3161-43da-80c9-440cf31d98b9-builder-dockercfg-jmkbp-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.948676 4874 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.948689 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.948703 4874 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0930f68b-3161-43da-80c9-440cf31d98b9-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.948714 4874 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jmkbp-push\" (UniqueName: \"kubernetes.io/secret/0930f68b-3161-43da-80c9-440cf31d98b9-builder-dockercfg-jmkbp-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.948727 4874 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz6zp\" (UniqueName: \"kubernetes.io/projected/0930f68b-3161-43da-80c9-440cf31d98b9-kube-api-access-nz6zp\") on node \"crc\" DevicePath \"\"" Jan 22 12:03:55 crc kubenswrapper[4874]: I0122 12:03:55.990047 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "0930f68b-3161-43da-80c9-440cf31d98b9" (UID: "0930f68b-3161-43da-80c9-440cf31d98b9"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:03:56 crc kubenswrapper[4874]: I0122 12:03:56.049845 4874 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:03:56 crc kubenswrapper[4874]: I0122 12:03:56.475680 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"0930f68b-3161-43da-80c9-440cf31d98b9","Type":"ContainerDied","Data":"886d9f0c67a3a4c8d14ceb141fd085df974e37449fd7e5219c23a2101e17a594"} Jan 22 12:03:56 crc kubenswrapper[4874]: I0122 12:03:56.475725 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="886d9f0c67a3a4c8d14ceb141fd085df974e37449fd7e5219c23a2101e17a594" Jan 22 12:03:56 crc kubenswrapper[4874]: I0122 12:03:56.475762 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:03:56 crc kubenswrapper[4874]: I0122 12:03:56.821232 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "0930f68b-3161-43da-80c9-440cf31d98b9" (UID: "0930f68b-3161-43da-80c9-440cf31d98b9"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:03:56 crc kubenswrapper[4874]: I0122 12:03:56.858793 4874 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0930f68b-3161-43da-80c9-440cf31d98b9-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.661764 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw"] Jan 22 12:04:02 crc kubenswrapper[4874]: E0122 12:04:02.663246 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0930f68b-3161-43da-80c9-440cf31d98b9" containerName="manage-dockerfile" Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.663318 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="0930f68b-3161-43da-80c9-440cf31d98b9" containerName="manage-dockerfile" Jan 22 12:04:02 crc kubenswrapper[4874]: E0122 12:04:02.663353 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0930f68b-3161-43da-80c9-440cf31d98b9" containerName="git-clone" Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.663485 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="0930f68b-3161-43da-80c9-440cf31d98b9" containerName="git-clone" Jan 22 12:04:02 crc kubenswrapper[4874]: E0122 12:04:02.663574 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0930f68b-3161-43da-80c9-440cf31d98b9" containerName="docker-build" 
Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.663596 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="0930f68b-3161-43da-80c9-440cf31d98b9" containerName="docker-build" Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.664064 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="0930f68b-3161-43da-80c9-440cf31d98b9" containerName="docker-build" Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.667290 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw" Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.670138 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-9gb57" Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.670427 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw"] Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.844550 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/f01ecec4-43d6-424e-843b-ef89a732199e-runner\") pod \"smart-gateway-operator-779b7b47dc-4zkhw\" (UID: \"f01ecec4-43d6-424e-843b-ef89a732199e\") " pod="service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw" Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.844657 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tl2\" (UniqueName: \"kubernetes.io/projected/f01ecec4-43d6-424e-843b-ef89a732199e-kube-api-access-c2tl2\") pod \"smart-gateway-operator-779b7b47dc-4zkhw\" (UID: \"f01ecec4-43d6-424e-843b-ef89a732199e\") " pod="service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw" Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.945745 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" 
(UniqueName: \"kubernetes.io/empty-dir/f01ecec4-43d6-424e-843b-ef89a732199e-runner\") pod \"smart-gateway-operator-779b7b47dc-4zkhw\" (UID: \"f01ecec4-43d6-424e-843b-ef89a732199e\") " pod="service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw" Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.945824 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2tl2\" (UniqueName: \"kubernetes.io/projected/f01ecec4-43d6-424e-843b-ef89a732199e-kube-api-access-c2tl2\") pod \"smart-gateway-operator-779b7b47dc-4zkhw\" (UID: \"f01ecec4-43d6-424e-843b-ef89a732199e\") " pod="service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw" Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.946311 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/f01ecec4-43d6-424e-843b-ef89a732199e-runner\") pod \"smart-gateway-operator-779b7b47dc-4zkhw\" (UID: \"f01ecec4-43d6-424e-843b-ef89a732199e\") " pod="service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw" Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.974013 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2tl2\" (UniqueName: \"kubernetes.io/projected/f01ecec4-43d6-424e-843b-ef89a732199e-kube-api-access-c2tl2\") pod \"smart-gateway-operator-779b7b47dc-4zkhw\" (UID: \"f01ecec4-43d6-424e-843b-ef89a732199e\") " pod="service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw" Jan 22 12:04:02 crc kubenswrapper[4874]: I0122 12:04:02.991764 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw" Jan 22 12:04:03 crc kubenswrapper[4874]: I0122 12:04:03.194927 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw"] Jan 22 12:04:03 crc kubenswrapper[4874]: I0122 12:04:03.202561 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:04:03 crc kubenswrapper[4874]: I0122 12:04:03.520123 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw" event={"ID":"f01ecec4-43d6-424e-843b-ef89a732199e","Type":"ContainerStarted","Data":"3cdd36d662ac42074462bc833dfe5a4412991b5028b8df949e8f12fcc10dfb9b"} Jan 22 12:04:07 crc kubenswrapper[4874]: I0122 12:04:07.772666 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-66768bfd46-6v9td"] Jan 22 12:04:07 crc kubenswrapper[4874]: I0122 12:04:07.773876 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-66768bfd46-6v9td" Jan 22 12:04:07 crc kubenswrapper[4874]: I0122 12:04:07.777430 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-hmdt2" Jan 22 12:04:07 crc kubenswrapper[4874]: I0122 12:04:07.790125 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-66768bfd46-6v9td"] Jan 22 12:04:07 crc kubenswrapper[4874]: I0122 12:04:07.819143 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413-runner\") pod \"service-telemetry-operator-66768bfd46-6v9td\" (UID: \"9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413\") " pod="service-telemetry/service-telemetry-operator-66768bfd46-6v9td" Jan 22 12:04:07 crc kubenswrapper[4874]: I0122 12:04:07.819210 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pccd6\" (UniqueName: \"kubernetes.io/projected/9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413-kube-api-access-pccd6\") pod \"service-telemetry-operator-66768bfd46-6v9td\" (UID: \"9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413\") " pod="service-telemetry/service-telemetry-operator-66768bfd46-6v9td" Jan 22 12:04:07 crc kubenswrapper[4874]: I0122 12:04:07.921073 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pccd6\" (UniqueName: \"kubernetes.io/projected/9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413-kube-api-access-pccd6\") pod \"service-telemetry-operator-66768bfd46-6v9td\" (UID: \"9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413\") " pod="service-telemetry/service-telemetry-operator-66768bfd46-6v9td" Jan 22 12:04:07 crc kubenswrapper[4874]: I0122 12:04:07.921179 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413-runner\") pod \"service-telemetry-operator-66768bfd46-6v9td\" (UID: \"9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413\") " pod="service-telemetry/service-telemetry-operator-66768bfd46-6v9td" Jan 22 12:04:07 crc kubenswrapper[4874]: I0122 12:04:07.921653 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413-runner\") pod \"service-telemetry-operator-66768bfd46-6v9td\" (UID: \"9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413\") " pod="service-telemetry/service-telemetry-operator-66768bfd46-6v9td" Jan 22 12:04:07 crc kubenswrapper[4874]: I0122 12:04:07.940026 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pccd6\" (UniqueName: \"kubernetes.io/projected/9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413-kube-api-access-pccd6\") pod \"service-telemetry-operator-66768bfd46-6v9td\" (UID: \"9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413\") " pod="service-telemetry/service-telemetry-operator-66768bfd46-6v9td" Jan 22 12:04:08 crc kubenswrapper[4874]: I0122 12:04:08.095578 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-66768bfd46-6v9td" Jan 22 12:04:13 crc kubenswrapper[4874]: I0122 12:04:13.460750 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-66768bfd46-6v9td"] Jan 22 12:04:15 crc kubenswrapper[4874]: W0122 12:04:15.333207 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c1cf40e_37fa_4e5a_8bb3_8ee8aee9c413.slice/crio-7b271d980e3d4689ef8eb27d1670f7dfb3b1abe89917c0046058f3856f588011 WatchSource:0}: Error finding container 7b271d980e3d4689ef8eb27d1670f7dfb3b1abe89917c0046058f3856f588011: Status 404 returned error can't find the container with id 7b271d980e3d4689ef8eb27d1670f7dfb3b1abe89917c0046058f3856f588011 Jan 22 12:04:15 crc kubenswrapper[4874]: I0122 12:04:15.616088 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-66768bfd46-6v9td" event={"ID":"9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413","Type":"ContainerStarted","Data":"7b271d980e3d4689ef8eb27d1670f7dfb3b1abe89917c0046058f3856f588011"} Jan 22 12:04:18 crc kubenswrapper[4874]: E0122 12:04:18.153013 4874 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:latest" Jan 22 12:04:18 crc kubenswrapper[4874]: E0122 12:04:18.153278 4874 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-gateway-operator.v5.0.1769083437,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2tl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop
:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-779b7b47dc-4zkhw_service-telemetry(f01ecec4-43d6-424e-843b-ef89a732199e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 12:04:18 crc kubenswrapper[4874]: E0122 12:04:18.154634 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw" podUID="f01ecec4-43d6-424e-843b-ef89a732199e" Jan 22 12:04:18 crc kubenswrapper[4874]: E0122 12:04:18.649467 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:latest\\\"\"" pod="service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw" podUID="f01ecec4-43d6-424e-843b-ef89a732199e" Jan 22 12:04:23 crc kubenswrapper[4874]: I0122 12:04:23.684069 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-66768bfd46-6v9td" event={"ID":"9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413","Type":"ContainerStarted","Data":"e73c375bccc59c8c66d59408ff25a7a58a9e99a99d320c2eff1c3936b98918b9"} Jan 22 12:04:31 crc kubenswrapper[4874]: I0122 12:04:31.739053 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-66768bfd46-6v9td" podStartSLOduration=17.165570992 
podStartE2EDuration="24.73903696s" podCreationTimestamp="2026-01-22 12:04:07 +0000 UTC" firstStartedPulling="2026-01-22 12:04:15.336102844 +0000 UTC m=+1429.181173954" lastFinishedPulling="2026-01-22 12:04:22.909568852 +0000 UTC m=+1436.754639922" observedRunningTime="2026-01-22 12:04:23.712453348 +0000 UTC m=+1437.557524498" watchObservedRunningTime="2026-01-22 12:04:31.73903696 +0000 UTC m=+1445.584108020" Jan 22 12:04:32 crc kubenswrapper[4874]: I0122 12:04:32.752104 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw" event={"ID":"f01ecec4-43d6-424e-843b-ef89a732199e","Type":"ContainerStarted","Data":"45613c1b63ae05368aca29c1bd76c66756c2ae4f66666668026c782988c44c8d"} Jan 22 12:04:32 crc kubenswrapper[4874]: I0122 12:04:32.782069 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-779b7b47dc-4zkhw" podStartSLOduration=1.8025797940000001 podStartE2EDuration="30.782041338s" podCreationTimestamp="2026-01-22 12:04:02 +0000 UTC" firstStartedPulling="2026-01-22 12:04:03.202203657 +0000 UTC m=+1417.047274737" lastFinishedPulling="2026-01-22 12:04:32.181665211 +0000 UTC m=+1446.026736281" observedRunningTime="2026-01-22 12:04:32.781301605 +0000 UTC m=+1446.626372675" watchObservedRunningTime="2026-01-22 12:04:32.782041338 +0000 UTC m=+1446.627112418" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.471599 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-lbnpc"] Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.473598 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.478354 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.478490 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.478688 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.479651 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.479693 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.481883 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.483269 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-ghwkd" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.483893 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.484011 4874 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.484037 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-sasl-config\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.484064 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-sasl-users\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.484083 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.484126 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hv7f\" (UniqueName: 
\"kubernetes.io/projected/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-kube-api-access-5hv7f\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.484180 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.516081 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-lbnpc"] Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.585071 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.585111 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-sasl-config\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.585130 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: 
\"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-sasl-users\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.585146 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.585172 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hv7f\" (UniqueName: \"kubernetes.io/projected/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-kube-api-access-5hv7f\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.585205 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.585249 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " 
pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.586351 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-sasl-config\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.595173 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.595209 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.595442 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.596915 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" 
(UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-sasl-users\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.600828 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hv7f\" (UniqueName: \"kubernetes.io/projected/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-kube-api-access-5hv7f\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.607850 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-lbnpc\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:45 crc kubenswrapper[4874]: I0122 12:04:45.798299 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:04:46 crc kubenswrapper[4874]: I0122 12:04:46.017689 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-lbnpc"] Jan 22 12:04:46 crc kubenswrapper[4874]: I0122 12:04:46.856470 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" event={"ID":"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527","Type":"ContainerStarted","Data":"991fe91ef8468119983543c90e335d348cd43ff832a854998cadb6f267da781f"} Jan 22 12:05:02 crc kubenswrapper[4874]: I0122 12:05:02.075265 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" event={"ID":"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527","Type":"ContainerStarted","Data":"1942e90d87d0c8ebb015abda0e958e1d2138e35879200aafa6962b056430edd8"} Jan 22 12:05:02 crc kubenswrapper[4874]: I0122 12:05:02.092787 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" podStartSLOduration=2.060106955 podStartE2EDuration="17.092770548s" podCreationTimestamp="2026-01-22 12:04:45 +0000 UTC" firstStartedPulling="2026-01-22 12:04:46.026693631 +0000 UTC m=+1459.871764701" lastFinishedPulling="2026-01-22 12:05:01.059357224 +0000 UTC m=+1474.904428294" observedRunningTime="2026-01-22 12:05:02.090504829 +0000 UTC m=+1475.935575919" watchObservedRunningTime="2026-01-22 12:05:02.092770548 +0000 UTC m=+1475.937841618" Jan 22 12:05:03 crc kubenswrapper[4874]: I0122 12:05:03.887457 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 22 12:05:03 crc kubenswrapper[4874]: I0122 12:05:03.888865 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 22 12:05:03 crc kubenswrapper[4874]: I0122 12:05:03.891634 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-1" Jan 22 12:05:03 crc kubenswrapper[4874]: I0122 12:05:03.891771 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-2" Jan 22 12:05:03 crc kubenswrapper[4874]: I0122 12:05:03.891764 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Jan 22 12:05:03 crc kubenswrapper[4874]: I0122 12:05:03.892009 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-bbm2x" Jan 22 12:05:03 crc kubenswrapper[4874]: I0122 12:05:03.893333 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Jan 22 12:05:03 crc kubenswrapper[4874]: I0122 12:05:03.893372 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Jan 22 12:05:03 crc kubenswrapper[4874]: I0122 12:05:03.893911 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Jan 22 12:05:03 crc kubenswrapper[4874]: I0122 12:05:03.893919 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Jan 22 12:05:03 crc kubenswrapper[4874]: I0122 12:05:03.894034 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Jan 22 12:05:03 crc kubenswrapper[4874]: I0122 12:05:03.894188 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Jan 22 12:05:03 crc kubenswrapper[4874]: I0122 12:05:03.913873 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/prometheus-default-0"] Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.052453 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-web-config\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.052510 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpdjs\" (UniqueName: \"kubernetes.io/projected/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-kube-api-access-gpdjs\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.052530 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.052554 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1fec953c-685d-4ed1-bb39-3ba4f9281425\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1fec953c-685d-4ed1-bb39-3ba4f9281425\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.052578 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.052600 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-config-out\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.052626 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-config\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.052650 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.052695 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-tls-assets\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.052711 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.052729 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.052754 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.154531 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-tls-assets\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.154584 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.154609 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.154643 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.154689 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-web-config\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.154710 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpdjs\" (UniqueName: \"kubernetes.io/projected/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-kube-api-access-gpdjs\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.154730 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.154758 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1fec953c-685d-4ed1-bb39-3ba4f9281425\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1fec953c-685d-4ed1-bb39-3ba4f9281425\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.154784 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.154808 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-config-out\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.154837 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-config\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.154868 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: E0122 12:05:04.154993 4874 
secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 22 12:05:04 crc kubenswrapper[4874]: E0122 12:05:04.155054 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-secret-default-prometheus-proxy-tls podName:7cd2b3c5-32a6-49c1-984c-86aa8fe36f85 nodeName:}" failed. No retries permitted until 2026-01-22 12:05:04.655033446 +0000 UTC m=+1478.500104516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "7cd2b3c5-32a6-49c1-984c-86aa8fe36f85") : secret "default-prometheus-proxy-tls" not found Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.156360 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.160960 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.162248 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: 
\"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.165176 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-config-out\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.165177 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.165794 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.167140 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.167176 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1fec953c-685d-4ed1-bb39-3ba4f9281425\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1fec953c-685d-4ed1-bb39-3ba4f9281425\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/37635c52ffc8d45540ce6f928b3d95e2edf1cbb912893c9ece4ebfa760a97525/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.170031 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-tls-assets\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.170652 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-config\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.180889 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-web-config\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.186556 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpdjs\" (UniqueName: \"kubernetes.io/projected/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-kube-api-access-gpdjs\") pod \"prometheus-default-0\" (UID: 
\"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.197286 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1fec953c-685d-4ed1-bb39-3ba4f9281425\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1fec953c-685d-4ed1-bb39-3ba4f9281425\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: I0122 12:05:04.661194 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:04 crc kubenswrapper[4874]: E0122 12:05:04.661567 4874 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 22 12:05:04 crc kubenswrapper[4874]: E0122 12:05:04.661682 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-secret-default-prometheus-proxy-tls podName:7cd2b3c5-32a6-49c1-984c-86aa8fe36f85 nodeName:}" failed. No retries permitted until 2026-01-22 12:05:05.661658709 +0000 UTC m=+1479.506729779 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "7cd2b3c5-32a6-49c1-984c-86aa8fe36f85") : secret "default-prometheus-proxy-tls" not found Jan 22 12:05:05 crc kubenswrapper[4874]: I0122 12:05:05.675618 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:05 crc kubenswrapper[4874]: I0122 12:05:05.680312 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7cd2b3c5-32a6-49c1-984c-86aa8fe36f85-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:05:05 crc kubenswrapper[4874]: I0122 12:05:05.755907 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 22 12:05:06 crc kubenswrapper[4874]: I0122 12:05:06.205739 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 22 12:05:07 crc kubenswrapper[4874]: I0122 12:05:07.112610 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85","Type":"ContainerStarted","Data":"5076195ba1893a2766c22408320072c928f35c9d00d20fa815b8b8aac6394ed7"} Jan 22 12:05:11 crc kubenswrapper[4874]: I0122 12:05:11.142845 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85","Type":"ContainerStarted","Data":"46eae4037523c1a86bc383e7393949e1f079f41fa015304765a32a52db657fe6"} Jan 22 12:05:13 crc kubenswrapper[4874]: I0122 12:05:13.483577 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-9qcrx"] Jan 22 12:05:13 crc kubenswrapper[4874]: I0122 12:05:13.485236 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-9qcrx" Jan 22 12:05:13 crc kubenswrapper[4874]: I0122 12:05:13.493444 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-9qcrx"] Jan 22 12:05:13 crc kubenswrapper[4874]: I0122 12:05:13.600094 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzbgl\" (UniqueName: \"kubernetes.io/projected/a14f7a5d-1819-4ca2-8683-96a338c70df6-kube-api-access-fzbgl\") pod \"default-snmp-webhook-6856cfb745-9qcrx\" (UID: \"a14f7a5d-1819-4ca2-8683-96a338c70df6\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-9qcrx" Jan 22 12:05:13 crc kubenswrapper[4874]: I0122 12:05:13.702220 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzbgl\" (UniqueName: \"kubernetes.io/projected/a14f7a5d-1819-4ca2-8683-96a338c70df6-kube-api-access-fzbgl\") pod \"default-snmp-webhook-6856cfb745-9qcrx\" (UID: \"a14f7a5d-1819-4ca2-8683-96a338c70df6\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-9qcrx" Jan 22 12:05:13 crc kubenswrapper[4874]: I0122 12:05:13.724572 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzbgl\" (UniqueName: \"kubernetes.io/projected/a14f7a5d-1819-4ca2-8683-96a338c70df6-kube-api-access-fzbgl\") pod \"default-snmp-webhook-6856cfb745-9qcrx\" (UID: \"a14f7a5d-1819-4ca2-8683-96a338c70df6\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-9qcrx" Jan 22 12:05:13 crc kubenswrapper[4874]: I0122 12:05:13.803486 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-9qcrx" Jan 22 12:05:14 crc kubenswrapper[4874]: I0122 12:05:14.234497 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-9qcrx"] Jan 22 12:05:15 crc kubenswrapper[4874]: I0122 12:05:15.174135 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-9qcrx" event={"ID":"a14f7a5d-1819-4ca2-8683-96a338c70df6","Type":"ContainerStarted","Data":"f407a60fc8f6ab5d4e4b4e0d71fe4be0fc91e3f5e46fbf4568e32dbadd495ea8"} Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.123072 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.125256 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.128841 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.128897 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.128966 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.129147 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.129223 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-cxk5p" Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.129166 4874 reflector.go:368] Caches populated for *v1.Secret from 
object-"service-telemetry"/"alertmanager-default-web-config"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.129509 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.254159 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.254218 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-web-config\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.254242 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-config-volume\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.254261 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.254287 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-adf40950-4a83-4124-a213-cde377d20e8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-adf40950-4a83-4124-a213-cde377d20e8e\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.254350 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30722d93-5804-4aba-a6e3-e2b891356163-tls-assets\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.254370 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.254422 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fjb\" (UniqueName: \"kubernetes.io/projected/30722d93-5804-4aba-a6e3-e2b891356163-kube-api-access-z5fjb\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.254450 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30722d93-5804-4aba-a6e3-e2b891356163-config-out\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.355637 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30722d93-5804-4aba-a6e3-e2b891356163-tls-assets\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.355700 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.355758 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5fjb\" (UniqueName: \"kubernetes.io/projected/30722d93-5804-4aba-a6e3-e2b891356163-kube-api-access-z5fjb\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.355804 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30722d93-5804-4aba-a6e3-e2b891356163-config-out\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.355850 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.355885 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-web-config\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.355915 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-config-volume\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.355941 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.355977 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-adf40950-4a83-4124-a213-cde377d20e8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-adf40950-4a83-4124-a213-cde377d20e8e\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: E0122 12:05:17.357992 4874 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Jan 22 12:05:17 crc kubenswrapper[4874]: E0122 12:05:17.358248 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-alertmanager-proxy-tls podName:30722d93-5804-4aba-a6e3-e2b891356163 nodeName:}" failed. No retries permitted until 2026-01-22 12:05:17.858229398 +0000 UTC m=+1491.703300478 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "30722d93-5804-4aba-a6e3-e2b891356163") : secret "default-alertmanager-proxy-tls" not found
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.362970 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30722d93-5804-4aba-a6e3-e2b891356163-config-out\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.363274 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-config-volume\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.363368 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-web-config\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.363735 4874 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.363767 4874 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-adf40950-4a83-4124-a213-cde377d20e8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-adf40950-4a83-4124-a213-cde377d20e8e\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2d7c76d617ad7234da60527edf901d0a9ef34c3c89379835b82ca1c2da10a6ce/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.364313 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.371167 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.375038 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5fjb\" (UniqueName: \"kubernetes.io/projected/30722d93-5804-4aba-a6e3-e2b891356163-kube-api-access-z5fjb\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.376828 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30722d93-5804-4aba-a6e3-e2b891356163-tls-assets\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.412458 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-adf40950-4a83-4124-a213-cde377d20e8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-adf40950-4a83-4124-a213-cde377d20e8e\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: E0122 12:05:17.522797 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cd2b3c5_32a6_49c1_984c_86aa8fe36f85.slice/crio-conmon-46eae4037523c1a86bc383e7393949e1f079f41fa015304765a32a52db657fe6.scope\": RecentStats: unable to find data in memory cache]"
Jan 22 12:05:17 crc kubenswrapper[4874]: I0122 12:05:17.865738 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:17 crc kubenswrapper[4874]: E0122 12:05:17.865913 4874 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Jan 22 12:05:17 crc kubenswrapper[4874]: E0122 12:05:17.865980 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-alertmanager-proxy-tls podName:30722d93-5804-4aba-a6e3-e2b891356163 nodeName:}" failed. No retries permitted until 2026-01-22 12:05:18.865965625 +0000 UTC m=+1492.711036695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "30722d93-5804-4aba-a6e3-e2b891356163") : secret "default-alertmanager-proxy-tls" not found
Jan 22 12:05:18 crc kubenswrapper[4874]: I0122 12:05:18.199901 4874 generic.go:334] "Generic (PLEG): container finished" podID="7cd2b3c5-32a6-49c1-984c-86aa8fe36f85" containerID="46eae4037523c1a86bc383e7393949e1f079f41fa015304765a32a52db657fe6" exitCode=0
Jan 22 12:05:18 crc kubenswrapper[4874]: I0122 12:05:18.199992 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85","Type":"ContainerDied","Data":"46eae4037523c1a86bc383e7393949e1f079f41fa015304765a32a52db657fe6"}
Jan 22 12:05:18 crc kubenswrapper[4874]: I0122 12:05:18.881182 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:18 crc kubenswrapper[4874]: E0122 12:05:18.881485 4874 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Jan 22 12:05:18 crc kubenswrapper[4874]: E0122 12:05:18.881623 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-alertmanager-proxy-tls podName:30722d93-5804-4aba-a6e3-e2b891356163 nodeName:}" failed. No retries permitted until 2026-01-22 12:05:20.881591343 +0000 UTC m=+1494.726662453 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "30722d93-5804-4aba-a6e3-e2b891356163") : secret "default-alertmanager-proxy-tls" not found
Jan 22 12:05:20 crc kubenswrapper[4874]: I0122 12:05:20.957689 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:20 crc kubenswrapper[4874]: I0122 12:05:20.977583 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/30722d93-5804-4aba-a6e3-e2b891356163-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"30722d93-5804-4aba-a6e3-e2b891356163\") " pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:21 crc kubenswrapper[4874]: I0122 12:05:21.043257 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Jan 22 12:05:23 crc kubenswrapper[4874]: I0122 12:05:23.384089 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Jan 22 12:05:24 crc kubenswrapper[4874]: I0122 12:05:24.245291 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"30722d93-5804-4aba-a6e3-e2b891356163","Type":"ContainerStarted","Data":"ec150f9482f77e1f4afd4ad1558d86ac0e170cb8e3de262b1ba2df809cf9e66d"}
Jan 22 12:05:24 crc kubenswrapper[4874]: I0122 12:05:24.247289 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-9qcrx" event={"ID":"a14f7a5d-1819-4ca2-8683-96a338c70df6","Type":"ContainerStarted","Data":"791b86629f1493b42f6ca4eeb5ea835f85573ce790df81d91166f2f18796dbdb"}
Jan 22 12:05:24 crc kubenswrapper[4874]: I0122 12:05:24.260629 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-9qcrx" podStartSLOduration=2.271680537 podStartE2EDuration="11.260605771s" podCreationTimestamp="2026-01-22 12:05:13 +0000 UTC" firstStartedPulling="2026-01-22 12:05:14.246798024 +0000 UTC m=+1488.091869104" lastFinishedPulling="2026-01-22 12:05:23.235723268 +0000 UTC m=+1497.080794338" observedRunningTime="2026-01-22 12:05:24.259481476 +0000 UTC m=+1498.104552556" watchObservedRunningTime="2026-01-22 12:05:24.260605771 +0000 UTC m=+1498.105676851"
Jan 22 12:05:26 crc kubenswrapper[4874]: I0122 12:05:26.266470 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"30722d93-5804-4aba-a6e3-e2b891356163","Type":"ContainerStarted","Data":"7ac51e4a91f65653b5b9e0030d22b5ffb622d7862ef9dad4a669c63b8bfcd8b2"}
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.190387 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"]
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.193539 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.195633 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.197771 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-jzxhb"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.198239 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.202765 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"]
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.203106 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.312475 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85","Type":"ContainerStarted","Data":"19ec2555e9c2cfd5b72886151b286bbd34c73b723b47e1ce79a93b2d5ff57b3d"}
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.313904 4874 generic.go:334] "Generic (PLEG): container finished" podID="30722d93-5804-4aba-a6e3-e2b891356163" containerID="7ac51e4a91f65653b5b9e0030d22b5ffb622d7862ef9dad4a669c63b8bfcd8b2" exitCode=0
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.313954 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"30722d93-5804-4aba-a6e3-e2b891356163","Type":"ContainerDied","Data":"7ac51e4a91f65653b5b9e0030d22b5ffb622d7862ef9dad4a669c63b8bfcd8b2"}
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.346325 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zbzw\" (UniqueName: \"kubernetes.io/projected/16eb2f20-a170-4504-8601-a08d7174edef-kube-api-access-9zbzw\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.346449 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/16eb2f20-a170-4504-8601-a08d7174edef-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.346503 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16eb2f20-a170-4504-8601-a08d7174edef-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.346546 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/16eb2f20-a170-4504-8601-a08d7174edef-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.346574 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/16eb2f20-a170-4504-8601-a08d7174edef-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.448434 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zbzw\" (UniqueName: \"kubernetes.io/projected/16eb2f20-a170-4504-8601-a08d7174edef-kube-api-access-9zbzw\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.448600 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/16eb2f20-a170-4504-8601-a08d7174edef-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.448657 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16eb2f20-a170-4504-8601-a08d7174edef-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.448697 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/16eb2f20-a170-4504-8601-a08d7174edef-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.448721 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/16eb2f20-a170-4504-8601-a08d7174edef-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: E0122 12:05:33.450559 4874 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Jan 22 12:05:33 crc kubenswrapper[4874]: E0122 12:05:33.450642 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16eb2f20-a170-4504-8601-a08d7174edef-default-cloud1-coll-meter-proxy-tls podName:16eb2f20-a170-4504-8601-a08d7174edef nodeName:}" failed. No retries permitted until 2026-01-22 12:05:33.950617371 +0000 UTC m=+1507.795688561 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/16eb2f20-a170-4504-8601-a08d7174edef-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg" (UID: "16eb2f20-a170-4504-8601-a08d7174edef") : secret "default-cloud1-coll-meter-proxy-tls" not found
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.451271 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/16eb2f20-a170-4504-8601-a08d7174edef-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.451959 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/16eb2f20-a170-4504-8601-a08d7174edef-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.457143 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/16eb2f20-a170-4504-8601-a08d7174edef-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.467034 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zbzw\" (UniqueName: \"kubernetes.io/projected/16eb2f20-a170-4504-8601-a08d7174edef-kube-api-access-9zbzw\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: I0122 12:05:33.959116 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16eb2f20-a170-4504-8601-a08d7174edef-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:33 crc kubenswrapper[4874]: E0122 12:05:33.959322 4874 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Jan 22 12:05:33 crc kubenswrapper[4874]: E0122 12:05:33.959644 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16eb2f20-a170-4504-8601-a08d7174edef-default-cloud1-coll-meter-proxy-tls podName:16eb2f20-a170-4504-8601-a08d7174edef nodeName:}" failed. No retries permitted until 2026-01-22 12:05:34.959626147 +0000 UTC m=+1508.804697217 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/16eb2f20-a170-4504-8601-a08d7174edef-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg" (UID: "16eb2f20-a170-4504-8601-a08d7174edef") : secret "default-cloud1-coll-meter-proxy-tls" not found
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.759553 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"]
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.760976 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.762979 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.763379 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.773662 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"]
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.871900 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1cca42d2-5390-4dc0-90fd-6abef972f828-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.871983 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1cca42d2-5390-4dc0-90fd-6abef972f828-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.872009 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blszq\" (UniqueName: \"kubernetes.io/projected/1cca42d2-5390-4dc0-90fd-6abef972f828-kube-api-access-blszq\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.872036 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cca42d2-5390-4dc0-90fd-6abef972f828-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.872074 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/1cca42d2-5390-4dc0-90fd-6abef972f828-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.973666 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blszq\" (UniqueName: \"kubernetes.io/projected/1cca42d2-5390-4dc0-90fd-6abef972f828-kube-api-access-blszq\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.973719 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cca42d2-5390-4dc0-90fd-6abef972f828-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.973756 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/1cca42d2-5390-4dc0-90fd-6abef972f828-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.973811 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1cca42d2-5390-4dc0-90fd-6abef972f828-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.973844 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16eb2f20-a170-4504-8601-a08d7174edef-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.973867 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1cca42d2-5390-4dc0-90fd-6abef972f828-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: E0122 12:05:34.973919 4874 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 22 12:05:34 crc kubenswrapper[4874]: E0122 12:05:34.974033 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cca42d2-5390-4dc0-90fd-6abef972f828-default-cloud1-ceil-meter-proxy-tls podName:1cca42d2-5390-4dc0-90fd-6abef972f828 nodeName:}" failed. No retries permitted until 2026-01-22 12:05:35.474010128 +0000 UTC m=+1509.319081258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/1cca42d2-5390-4dc0-90fd-6abef972f828-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" (UID: "1cca42d2-5390-4dc0-90fd-6abef972f828") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.974444 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1cca42d2-5390-4dc0-90fd-6abef972f828-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.976055 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1cca42d2-5390-4dc0-90fd-6abef972f828-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.980014 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/1cca42d2-5390-4dc0-90fd-6abef972f828-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.990311 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/16eb2f20-a170-4504-8601-a08d7174edef-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg\" (UID: \"16eb2f20-a170-4504-8601-a08d7174edef\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"
Jan 22 12:05:34 crc kubenswrapper[4874]: I0122 12:05:34.990626 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blszq\" (UniqueName: \"kubernetes.io/projected/1cca42d2-5390-4dc0-90fd-6abef972f828-kube-api-access-blszq\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"
Jan 22 12:05:35 crc kubenswrapper[4874]: I0122 12:05:35.010247 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg" Jan 22 12:05:35 crc kubenswrapper[4874]: I0122 12:05:35.330147 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85","Type":"ContainerStarted","Data":"c332ddf8b4c4c5266dfde74289d4bb221b6529600fba655a0d5bd085cb49f109"} Jan 22 12:05:35 crc kubenswrapper[4874]: I0122 12:05:35.480337 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cca42d2-5390-4dc0-90fd-6abef972f828-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" Jan 22 12:05:35 crc kubenswrapper[4874]: E0122 12:05:35.480592 4874 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 22 12:05:35 crc kubenswrapper[4874]: E0122 12:05:35.480663 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cca42d2-5390-4dc0-90fd-6abef972f828-default-cloud1-ceil-meter-proxy-tls podName:1cca42d2-5390-4dc0-90fd-6abef972f828 nodeName:}" failed. No retries permitted until 2026-01-22 12:05:36.480648511 +0000 UTC m=+1510.325719581 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/1cca42d2-5390-4dc0-90fd-6abef972f828-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" (UID: "1cca42d2-5390-4dc0-90fd-6abef972f828") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 22 12:05:35 crc kubenswrapper[4874]: I0122 12:05:35.745456 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg"] Jan 22 12:05:35 crc kubenswrapper[4874]: W0122 12:05:35.754991 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16eb2f20_a170_4504_8601_a08d7174edef.slice/crio-6faf6bdfd8162b5f04b23a8a351b6dc2e9b82adf41e742fc72f0f9ec7dfb2565 WatchSource:0}: Error finding container 6faf6bdfd8162b5f04b23a8a351b6dc2e9b82adf41e742fc72f0f9ec7dfb2565: Status 404 returned error can't find the container with id 6faf6bdfd8162b5f04b23a8a351b6dc2e9b82adf41e742fc72f0f9ec7dfb2565 Jan 22 12:05:36 crc kubenswrapper[4874]: I0122 12:05:36.340609 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"30722d93-5804-4aba-a6e3-e2b891356163","Type":"ContainerStarted","Data":"b34999e847ede69071117228fd59d6a319bb4137fa841753278e8c7325cd0548"} Jan 22 12:05:36 crc kubenswrapper[4874]: I0122 12:05:36.342544 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg" event={"ID":"16eb2f20-a170-4504-8601-a08d7174edef","Type":"ContainerStarted","Data":"6faf6bdfd8162b5f04b23a8a351b6dc2e9b82adf41e742fc72f0f9ec7dfb2565"} Jan 22 12:05:36 crc kubenswrapper[4874]: I0122 12:05:36.499302 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/1cca42d2-5390-4dc0-90fd-6abef972f828-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" Jan 22 12:05:36 crc kubenswrapper[4874]: I0122 12:05:36.526229 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1cca42d2-5390-4dc0-90fd-6abef972f828-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np\" (UID: \"1cca42d2-5390-4dc0-90fd-6abef972f828\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" Jan 22 12:05:36 crc kubenswrapper[4874]: I0122 12:05:36.638859 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" Jan 22 12:05:37 crc kubenswrapper[4874]: I0122 12:05:37.091346 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np"] Jan 22 12:05:38 crc kubenswrapper[4874]: I0122 12:05:38.359043 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" event={"ID":"1cca42d2-5390-4dc0-90fd-6abef972f828","Type":"ContainerStarted","Data":"7568bf2ee43f93d9bedbe3888875a2a9b8f779219dcb356fa148ce2b8472a51f"} Jan 22 12:05:38 crc kubenswrapper[4874]: I0122 12:05:38.362713 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"30722d93-5804-4aba-a6e3-e2b891356163","Type":"ContainerStarted","Data":"aa5ea25f32a76dff4abbac5521051de55d0791e87c40f78904cd5d5e3cc27020"} Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.373262 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf"] Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.375103 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.377033 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.377239 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.386366 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf"] Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.487329 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.487392 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gqt4\" (UniqueName: \"kubernetes.io/projected/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-kube-api-access-9gqt4\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.487427 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.487575 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.487723 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.590187 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.590287 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: 
\"kubernetes.io/secret/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.590326 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gqt4\" (UniqueName: \"kubernetes.io/projected/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-kube-api-access-9gqt4\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.590349 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.590372 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.590775 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: E0122 12:05:41.590861 4874 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 22 12:05:41 crc kubenswrapper[4874]: E0122 12:05:41.590910 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-default-cloud1-sens-meter-proxy-tls podName:7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2 nodeName:}" failed. No retries permitted until 2026-01-22 12:05:42.090894827 +0000 UTC m=+1515.935965897 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" (UID: "7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.595619 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.600312 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:41 crc kubenswrapper[4874]: I0122 12:05:41.609318 4874 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gqt4\" (UniqueName: \"kubernetes.io/projected/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-kube-api-access-9gqt4\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:42 crc kubenswrapper[4874]: I0122 12:05:42.104710 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:42 crc kubenswrapper[4874]: E0122 12:05:42.104871 4874 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 22 12:05:42 crc kubenswrapper[4874]: E0122 12:05:42.104943 4874 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-default-cloud1-sens-meter-proxy-tls podName:7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2 nodeName:}" failed. No retries permitted until 2026-01-22 12:05:43.104923147 +0000 UTC m=+1516.949994217 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" (UID: "7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 22 12:05:43 crc kubenswrapper[4874]: I0122 12:05:43.116977 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:43 crc kubenswrapper[4874]: I0122 12:05:43.131023 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf\" (UID: \"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:43 crc kubenswrapper[4874]: I0122 12:05:43.220829 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" Jan 22 12:05:43 crc kubenswrapper[4874]: I0122 12:05:43.422867 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7cd2b3c5-32a6-49c1-984c-86aa8fe36f85","Type":"ContainerStarted","Data":"f4f8e2b71fca6a5ae2ec4b19e5944e57282e21c8ee6397e3f04a26746c9e3e6c"} Jan 22 12:05:43 crc kubenswrapper[4874]: I0122 12:05:43.424847 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg" event={"ID":"16eb2f20-a170-4504-8601-a08d7174edef","Type":"ContainerStarted","Data":"bf6822cb867a1bc23e35dd1ff5500debdbf8238e8a8d1db6bcf22724dfff1ed1"} Jan 22 12:05:43 crc kubenswrapper[4874]: I0122 12:05:43.426586 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" event={"ID":"1cca42d2-5390-4dc0-90fd-6abef972f828","Type":"ContainerStarted","Data":"b6d7d181d3e7422af148da1ee32a5cbfb5c7d8afd452164cfdc65eba9cf1f192"} Jan 22 12:05:43 crc kubenswrapper[4874]: I0122 12:05:43.431448 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"30722d93-5804-4aba-a6e3-e2b891356163","Type":"ContainerStarted","Data":"47c2747d7fd53f2fd0cd892766f3e53d7a763817adfc97900b0db150b45a8c99"} Jan 22 12:05:43 crc kubenswrapper[4874]: I0122 12:05:43.449391 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=5.034048635 podStartE2EDuration="41.449379757s" podCreationTimestamp="2026-01-22 12:05:02 +0000 UTC" firstStartedPulling="2026-01-22 12:05:06.212427755 +0000 UTC m=+1480.057498825" lastFinishedPulling="2026-01-22 12:05:42.627758877 +0000 UTC m=+1516.472829947" observedRunningTime="2026-01-22 12:05:43.444294711 +0000 UTC m=+1517.289365781" 
watchObservedRunningTime="2026-01-22 12:05:43.449379757 +0000 UTC m=+1517.294450827" Jan 22 12:05:43 crc kubenswrapper[4874]: I0122 12:05:43.476694 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=18.166342296 podStartE2EDuration="27.476674074s" podCreationTimestamp="2026-01-22 12:05:16 +0000 UTC" firstStartedPulling="2026-01-22 12:05:33.31651269 +0000 UTC m=+1507.161583780" lastFinishedPulling="2026-01-22 12:05:42.626844488 +0000 UTC m=+1516.471915558" observedRunningTime="2026-01-22 12:05:43.472444774 +0000 UTC m=+1517.317515834" watchObservedRunningTime="2026-01-22 12:05:43.476674074 +0000 UTC m=+1517.321745144" Jan 22 12:05:43 crc kubenswrapper[4874]: I0122 12:05:43.520813 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:05:43 crc kubenswrapper[4874]: I0122 12:05:43.521071 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:05:44 crc kubenswrapper[4874]: I0122 12:05:44.052385 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf"] Jan 22 12:05:44 crc kubenswrapper[4874]: W0122 12:05:44.077915 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c0b7de5_b0a4_4c4d_93c3_1feba19a95a2.slice/crio-df982c53fcb5f37bf927450c8ea79cec679ef0f5cf6513841079b6ef4e4578e2 WatchSource:0}: Error finding 
container df982c53fcb5f37bf927450c8ea79cec679ef0f5cf6513841079b6ef4e4578e2: Status 404 returned error can't find the container with id df982c53fcb5f37bf927450c8ea79cec679ef0f5cf6513841079b6ef4e4578e2 Jan 22 12:05:44 crc kubenswrapper[4874]: I0122 12:05:44.441294 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" event={"ID":"1cca42d2-5390-4dc0-90fd-6abef972f828","Type":"ContainerStarted","Data":"0b8c4a0b0ba7c449f8d752c598d619f6497f68d22e47bcf468a644699c35b9f8"} Jan 22 12:05:44 crc kubenswrapper[4874]: I0122 12:05:44.442081 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" event={"ID":"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2","Type":"ContainerStarted","Data":"df982c53fcb5f37bf927450c8ea79cec679ef0f5cf6513841079b6ef4e4578e2"} Jan 22 12:05:44 crc kubenswrapper[4874]: I0122 12:05:44.446724 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg" event={"ID":"16eb2f20-a170-4504-8601-a08d7174edef","Type":"ContainerStarted","Data":"d00586b99f40c23c83e98d796f708a92dd64728a1c6fc574bfa7124a09972b2f"} Jan 22 12:05:45 crc kubenswrapper[4874]: I0122 12:05:45.462044 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" event={"ID":"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2","Type":"ContainerStarted","Data":"461c8f650e5a8a977666075e1277e3c9267495a8c7840f9acdf0c5d6a6800f08"} Jan 22 12:05:45 crc kubenswrapper[4874]: I0122 12:05:45.462313 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" event={"ID":"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2","Type":"ContainerStarted","Data":"cc0d66438daa34f1ff09fdd821cd6a828285cdf4e2c301e2734ce23de7c377f5"} Jan 22 12:05:45 crc kubenswrapper[4874]: I0122 
12:05:45.756593 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.037363 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7"] Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.039428 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.044194 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.044473 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.059243 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7"] Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.116546 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8df9n\" (UniqueName: \"kubernetes.io/projected/79746e6a-bee0-42eb-840c-36075d04abde-kube-api-access-8df9n\") pod \"default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7\" (UID: \"79746e6a-bee0-42eb-840c-36075d04abde\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.116600 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/79746e6a-bee0-42eb-840c-36075d04abde-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7\" (UID: \"79746e6a-bee0-42eb-840c-36075d04abde\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.116627 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/79746e6a-bee0-42eb-840c-36075d04abde-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7\" (UID: \"79746e6a-bee0-42eb-840c-36075d04abde\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.116695 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/79746e6a-bee0-42eb-840c-36075d04abde-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7\" (UID: \"79746e6a-bee0-42eb-840c-36075d04abde\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.217435 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/79746e6a-bee0-42eb-840c-36075d04abde-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7\" (UID: \"79746e6a-bee0-42eb-840c-36075d04abde\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.217502 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8df9n\" (UniqueName: \"kubernetes.io/projected/79746e6a-bee0-42eb-840c-36075d04abde-kube-api-access-8df9n\") pod \"default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7\" (UID: \"79746e6a-bee0-42eb-840c-36075d04abde\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.217528 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/79746e6a-bee0-42eb-840c-36075d04abde-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7\" (UID: \"79746e6a-bee0-42eb-840c-36075d04abde\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.217546 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/79746e6a-bee0-42eb-840c-36075d04abde-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7\" (UID: \"79746e6a-bee0-42eb-840c-36075d04abde\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.218391 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/79746e6a-bee0-42eb-840c-36075d04abde-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7\" (UID: \"79746e6a-bee0-42eb-840c-36075d04abde\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.218682 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/79746e6a-bee0-42eb-840c-36075d04abde-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7\" (UID: \"79746e6a-bee0-42eb-840c-36075d04abde\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.227214 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/79746e6a-bee0-42eb-840c-36075d04abde-elastic-certs\") pod 
\"default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7\" (UID: \"79746e6a-bee0-42eb-840c-36075d04abde\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.232440 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8df9n\" (UniqueName: \"kubernetes.io/projected/79746e6a-bee0-42eb-840c-36075d04abde-kube-api-access-8df9n\") pod \"default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7\" (UID: \"79746e6a-bee0-42eb-840c-36075d04abde\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:48 crc kubenswrapper[4874]: I0122 12:05:48.365557 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.069161 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9"] Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.070739 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.074515 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.081202 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9"] Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.234169 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwsdz\" (UniqueName: \"kubernetes.io/projected/01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3-kube-api-access-fwsdz\") pod \"default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9\" (UID: \"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.234230 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9\" (UID: \"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.234277 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9\" (UID: \"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.234319 4874 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9\" (UID: \"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.245232 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7"] Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.335874 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9\" (UID: \"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.335984 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwsdz\" (UniqueName: \"kubernetes.io/projected/01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3-kube-api-access-fwsdz\") pod \"default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9\" (UID: \"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.336061 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9\" (UID: \"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 
12:05:49.336166 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9\" (UID: \"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.336454 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9\" (UID: \"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.337810 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9\" (UID: \"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.341624 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9\" (UID: \"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.358353 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwsdz\" (UniqueName: \"kubernetes.io/projected/01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3-kube-api-access-fwsdz\") pod 
\"default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9\" (UID: \"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.395765 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.492763 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg" event={"ID":"16eb2f20-a170-4504-8601-a08d7174edef","Type":"ContainerStarted","Data":"69c433e99a725517200fb9d45ed6e81da3887cc6894fd645f13ecc0bb6daffba"} Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.497177 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" event={"ID":"1cca42d2-5390-4dc0-90fd-6abef972f828","Type":"ContainerStarted","Data":"f83992e9db1ba9c89ed7185fc4772855270eace8775a5e26f14083ad85ef4497"} Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.499733 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" event={"ID":"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2","Type":"ContainerStarted","Data":"0fc95eff3e5747b70e69ba8740b6d2f7899fec0e29b7a5271a2cd615230c1863"} Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.503156 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" event={"ID":"79746e6a-bee0-42eb-840c-36075d04abde","Type":"ContainerStarted","Data":"57ae4c13120d233f1b05b1010c95ceb5a571f302e4e0502b5e01d31027df9bca"} Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.525927 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg" podStartSLOduration=3.384248665 podStartE2EDuration="16.52590654s" podCreationTimestamp="2026-01-22 12:05:33 +0000 UTC" firstStartedPulling="2026-01-22 12:05:35.758025475 +0000 UTC m=+1509.603096535" lastFinishedPulling="2026-01-22 12:05:48.89968333 +0000 UTC m=+1522.744754410" observedRunningTime="2026-01-22 12:05:49.515441779 +0000 UTC m=+1523.360512849" watchObservedRunningTime="2026-01-22 12:05:49.52590654 +0000 UTC m=+1523.370977610" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.546002 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" podStartSLOduration=4.10197025 podStartE2EDuration="15.545984045s" podCreationTimestamp="2026-01-22 12:05:34 +0000 UTC" firstStartedPulling="2026-01-22 12:05:37.507490743 +0000 UTC m=+1511.352561813" lastFinishedPulling="2026-01-22 12:05:48.951504538 +0000 UTC m=+1522.796575608" observedRunningTime="2026-01-22 12:05:49.544705606 +0000 UTC m=+1523.389776676" watchObservedRunningTime="2026-01-22 12:05:49.545984045 +0000 UTC m=+1523.391055115" Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.572290 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" podStartSLOduration=3.736024075 podStartE2EDuration="8.572269722s" podCreationTimestamp="2026-01-22 12:05:41 +0000 UTC" firstStartedPulling="2026-01-22 12:05:44.082505408 +0000 UTC m=+1517.927576478" lastFinishedPulling="2026-01-22 12:05:48.918751055 +0000 UTC m=+1522.763822125" observedRunningTime="2026-01-22 12:05:49.564254875 +0000 UTC m=+1523.409325945" watchObservedRunningTime="2026-01-22 12:05:49.572269722 +0000 UTC m=+1523.417340792" Jan 22 12:05:49 crc kubenswrapper[4874]: W0122 12:05:49.847964 4874 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01e2e1fb_0e3b_4c3b_b5c5_df459233c6e3.slice/crio-d5d4378713e394dc5a7268b56220f34dbea522db178c42678d9c82528beccdb1 WatchSource:0}: Error finding container d5d4378713e394dc5a7268b56220f34dbea522db178c42678d9c82528beccdb1: Status 404 returned error can't find the container with id d5d4378713e394dc5a7268b56220f34dbea522db178c42678d9c82528beccdb1 Jan 22 12:05:49 crc kubenswrapper[4874]: I0122 12:05:49.850801 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9"] Jan 22 12:05:50 crc kubenswrapper[4874]: I0122 12:05:50.511592 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" event={"ID":"79746e6a-bee0-42eb-840c-36075d04abde","Type":"ContainerStarted","Data":"09cb582dc76bac620de55013695a6049eb521a343c611714eaf77b7038c60d7e"} Jan 22 12:05:50 crc kubenswrapper[4874]: I0122 12:05:50.512856 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" event={"ID":"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3","Type":"ContainerStarted","Data":"d5d4378713e394dc5a7268b56220f34dbea522db178c42678d9c82528beccdb1"} Jan 22 12:05:51 crc kubenswrapper[4874]: I0122 12:05:51.021877 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Jan 22 12:05:51 crc kubenswrapper[4874]: I0122 12:05:51.055902 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Jan 22 12:05:51 crc kubenswrapper[4874]: I0122 12:05:51.520020 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" 
event={"ID":"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3","Type":"ContainerStarted","Data":"0451bbcc35433072c924780c0fd1a6e8ba140eace530b10301001354bb9c5c0f"} Jan 22 12:05:51 crc kubenswrapper[4874]: I0122 12:05:51.521796 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" event={"ID":"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3","Type":"ContainerStarted","Data":"514e40ce0ac94f7f5ee03f53f502f032cb0df43f43b4da1487e85d3afe4ac50a"} Jan 22 12:05:51 crc kubenswrapper[4874]: I0122 12:05:51.521855 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" event={"ID":"79746e6a-bee0-42eb-840c-36075d04abde","Type":"ContainerStarted","Data":"58c31f117d1242b5994f1205597c4e1e95231093a0ea8abb96f2a73941f5bccc"} Jan 22 12:05:51 crc kubenswrapper[4874]: I0122 12:05:51.541608 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" podStartSLOduration=1.102316962 podStartE2EDuration="2.541105054s" podCreationTimestamp="2026-01-22 12:05:49 +0000 UTC" firstStartedPulling="2026-01-22 12:05:49.852981217 +0000 UTC m=+1523.698052297" lastFinishedPulling="2026-01-22 12:05:51.291769319 +0000 UTC m=+1525.136840389" observedRunningTime="2026-01-22 12:05:51.537947557 +0000 UTC m=+1525.383018657" watchObservedRunningTime="2026-01-22 12:05:51.541105054 +0000 UTC m=+1525.386176124" Jan 22 12:05:51 crc kubenswrapper[4874]: I0122 12:05:51.565104 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" podStartSLOduration=2.446574867 podStartE2EDuration="3.56508416s" podCreationTimestamp="2026-01-22 12:05:48 +0000 UTC" firstStartedPulling="2026-01-22 12:05:49.238023943 +0000 UTC m=+1523.083095013" lastFinishedPulling="2026-01-22 12:05:50.356533236 +0000 UTC 
m=+1524.201604306" observedRunningTime="2026-01-22 12:05:51.5543129 +0000 UTC m=+1525.399383990" watchObservedRunningTime="2026-01-22 12:05:51.56508416 +0000 UTC m=+1525.410155230" Jan 22 12:05:51 crc kubenswrapper[4874]: I0122 12:05:51.593785 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Jan 22 12:06:04 crc kubenswrapper[4874]: I0122 12:06:04.891186 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 22 12:06:04 crc kubenswrapper[4874]: I0122 12:06:04.893841 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 22 12:06:04 crc kubenswrapper[4874]: I0122 12:06:04.896772 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Jan 22 12:06:04 crc kubenswrapper[4874]: I0122 12:06:04.896953 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Jan 22 12:06:04 crc kubenswrapper[4874]: I0122 12:06:04.905941 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 22 12:06:05 crc kubenswrapper[4874]: I0122 12:06:05.016664 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/056ff179-9e03-4f48-b3d3-fa995d0c4093-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"056ff179-9e03-4f48-b3d3-fa995d0c4093\") " pod="service-telemetry/qdr-test" Jan 22 12:06:05 crc kubenswrapper[4874]: I0122 12:06:05.016757 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/056ff179-9e03-4f48-b3d3-fa995d0c4093-qdr-test-config\") pod \"qdr-test\" (UID: \"056ff179-9e03-4f48-b3d3-fa995d0c4093\") " pod="service-telemetry/qdr-test" Jan 22 
12:06:05 crc kubenswrapper[4874]: I0122 12:06:05.016834 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjrpk\" (UniqueName: \"kubernetes.io/projected/056ff179-9e03-4f48-b3d3-fa995d0c4093-kube-api-access-rjrpk\") pod \"qdr-test\" (UID: \"056ff179-9e03-4f48-b3d3-fa995d0c4093\") " pod="service-telemetry/qdr-test" Jan 22 12:06:05 crc kubenswrapper[4874]: I0122 12:06:05.117902 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/056ff179-9e03-4f48-b3d3-fa995d0c4093-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"056ff179-9e03-4f48-b3d3-fa995d0c4093\") " pod="service-telemetry/qdr-test" Jan 22 12:06:05 crc kubenswrapper[4874]: I0122 12:06:05.117958 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/056ff179-9e03-4f48-b3d3-fa995d0c4093-qdr-test-config\") pod \"qdr-test\" (UID: \"056ff179-9e03-4f48-b3d3-fa995d0c4093\") " pod="service-telemetry/qdr-test" Jan 22 12:06:05 crc kubenswrapper[4874]: I0122 12:06:05.118017 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjrpk\" (UniqueName: \"kubernetes.io/projected/056ff179-9e03-4f48-b3d3-fa995d0c4093-kube-api-access-rjrpk\") pod \"qdr-test\" (UID: \"056ff179-9e03-4f48-b3d3-fa995d0c4093\") " pod="service-telemetry/qdr-test" Jan 22 12:06:05 crc kubenswrapper[4874]: I0122 12:06:05.118863 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/056ff179-9e03-4f48-b3d3-fa995d0c4093-qdr-test-config\") pod \"qdr-test\" (UID: \"056ff179-9e03-4f48-b3d3-fa995d0c4093\") " pod="service-telemetry/qdr-test" Jan 22 12:06:05 crc kubenswrapper[4874]: I0122 12:06:05.130611 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/056ff179-9e03-4f48-b3d3-fa995d0c4093-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"056ff179-9e03-4f48-b3d3-fa995d0c4093\") " pod="service-telemetry/qdr-test" Jan 22 12:06:05 crc kubenswrapper[4874]: I0122 12:06:05.140136 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjrpk\" (UniqueName: \"kubernetes.io/projected/056ff179-9e03-4f48-b3d3-fa995d0c4093-kube-api-access-rjrpk\") pod \"qdr-test\" (UID: \"056ff179-9e03-4f48-b3d3-fa995d0c4093\") " pod="service-telemetry/qdr-test" Jan 22 12:06:05 crc kubenswrapper[4874]: I0122 12:06:05.215100 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 22 12:06:05 crc kubenswrapper[4874]: I0122 12:06:05.702879 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 22 12:06:05 crc kubenswrapper[4874]: I0122 12:06:05.912315 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-lbnpc"] Jan 22 12:06:05 crc kubenswrapper[4874]: I0122 12:06:05.912643 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" podUID="c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527" containerName="default-interconnect" containerID="cri-o://1942e90d87d0c8ebb015abda0e958e1d2138e35879200aafa6962b056430edd8" gracePeriod=30 Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.260799 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.438764 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hv7f\" (UniqueName: \"kubernetes.io/projected/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-kube-api-access-5hv7f\") pod \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.438812 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-sasl-users\") pod \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.438902 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-sasl-config\") pod \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.438965 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-inter-router-credentials\") pod \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.438992 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-inter-router-ca\") pod \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " Jan 22 12:06:06 crc kubenswrapper[4874]: 
I0122 12:06:06.439542 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527" (UID: "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.439689 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-openstack-ca\") pod \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.439772 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-openstack-credentials\") pod \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\" (UID: \"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527\") " Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.440079 4874 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-sasl-config\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.444979 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-kube-api-access-5hv7f" (OuterVolumeSpecName: "kube-api-access-5hv7f") pod "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527" (UID: "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527"). InnerVolumeSpecName "kube-api-access-5hv7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.445517 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527" (UID: "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.445541 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527" (UID: "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.445610 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527" (UID: "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.445749 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527" (UID: "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527"). InnerVolumeSpecName "default-interconnect-inter-router-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.461649 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527" (UID: "c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.541626 4874 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.541696 4874 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.541726 4874 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.541753 4874 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.541780 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hv7f\" 
(UniqueName: \"kubernetes.io/projected/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-kube-api-access-5hv7f\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.541805 4874 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527-sasl-users\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.624526 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"056ff179-9e03-4f48-b3d3-fa995d0c4093","Type":"ContainerStarted","Data":"bd28960a826427f4fcb49d523ac7fbd479766cbb2296e3ca24891bfcf1c6705e"} Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.627158 4874 generic.go:334] "Generic (PLEG): container finished" podID="01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3" containerID="514e40ce0ac94f7f5ee03f53f502f032cb0df43f43b4da1487e85d3afe4ac50a" exitCode=0 Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.627216 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" event={"ID":"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3","Type":"ContainerDied","Data":"514e40ce0ac94f7f5ee03f53f502f032cb0df43f43b4da1487e85d3afe4ac50a"} Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.627673 4874 scope.go:117] "RemoveContainer" containerID="514e40ce0ac94f7f5ee03f53f502f032cb0df43f43b4da1487e85d3afe4ac50a" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.642341 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2" containerID="461c8f650e5a8a977666075e1277e3c9267495a8c7840f9acdf0c5d6a6800f08" exitCode=0 Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.642438 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" 
event={"ID":"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2","Type":"ContainerDied","Data":"461c8f650e5a8a977666075e1277e3c9267495a8c7840f9acdf0c5d6a6800f08"} Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.643024 4874 scope.go:117] "RemoveContainer" containerID="461c8f650e5a8a977666075e1277e3c9267495a8c7840f9acdf0c5d6a6800f08" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.662669 4874 generic.go:334] "Generic (PLEG): container finished" podID="79746e6a-bee0-42eb-840c-36075d04abde" containerID="09cb582dc76bac620de55013695a6049eb521a343c611714eaf77b7038c60d7e" exitCode=0 Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.662766 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" event={"ID":"79746e6a-bee0-42eb-840c-36075d04abde","Type":"ContainerDied","Data":"09cb582dc76bac620de55013695a6049eb521a343c611714eaf77b7038c60d7e"} Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.663358 4874 scope.go:117] "RemoveContainer" containerID="09cb582dc76bac620de55013695a6049eb521a343c611714eaf77b7038c60d7e" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.667213 4874 generic.go:334] "Generic (PLEG): container finished" podID="c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527" containerID="1942e90d87d0c8ebb015abda0e958e1d2138e35879200aafa6962b056430edd8" exitCode=0 Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.667257 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.667257 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" event={"ID":"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527","Type":"ContainerDied","Data":"1942e90d87d0c8ebb015abda0e958e1d2138e35879200aafa6962b056430edd8"} Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.667375 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-lbnpc" event={"ID":"c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527","Type":"ContainerDied","Data":"991fe91ef8468119983543c90e335d348cd43ff832a854998cadb6f267da781f"} Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.667419 4874 scope.go:117] "RemoveContainer" containerID="1942e90d87d0c8ebb015abda0e958e1d2138e35879200aafa6962b056430edd8" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.671990 4874 generic.go:334] "Generic (PLEG): container finished" podID="16eb2f20-a170-4504-8601-a08d7174edef" containerID="d00586b99f40c23c83e98d796f708a92dd64728a1c6fc574bfa7124a09972b2f" exitCode=0 Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.672041 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg" event={"ID":"16eb2f20-a170-4504-8601-a08d7174edef","Type":"ContainerDied","Data":"d00586b99f40c23c83e98d796f708a92dd64728a1c6fc574bfa7124a09972b2f"} Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.672714 4874 scope.go:117] "RemoveContainer" containerID="d00586b99f40c23c83e98d796f708a92dd64728a1c6fc574bfa7124a09972b2f" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.707429 4874 scope.go:117] "RemoveContainer" containerID="1942e90d87d0c8ebb015abda0e958e1d2138e35879200aafa6962b056430edd8" Jan 22 12:06:06 crc kubenswrapper[4874]: E0122 12:06:06.707878 4874 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"1942e90d87d0c8ebb015abda0e958e1d2138e35879200aafa6962b056430edd8\": container with ID starting with 1942e90d87d0c8ebb015abda0e958e1d2138e35879200aafa6962b056430edd8 not found: ID does not exist" containerID="1942e90d87d0c8ebb015abda0e958e1d2138e35879200aafa6962b056430edd8" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.707923 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1942e90d87d0c8ebb015abda0e958e1d2138e35879200aafa6962b056430edd8"} err="failed to get container status \"1942e90d87d0c8ebb015abda0e958e1d2138e35879200aafa6962b056430edd8\": rpc error: code = NotFound desc = could not find container \"1942e90d87d0c8ebb015abda0e958e1d2138e35879200aafa6962b056430edd8\": container with ID starting with 1942e90d87d0c8ebb015abda0e958e1d2138e35879200aafa6962b056430edd8 not found: ID does not exist" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.768967 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-lbnpc"] Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.777035 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-lbnpc"] Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.864184 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vq2h8"] Jan 22 12:06:06 crc kubenswrapper[4874]: E0122 12:06:06.864510 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527" containerName="default-interconnect" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.864525 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527" containerName="default-interconnect" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.864692 4874 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527" containerName="default-interconnect" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.865550 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:06 crc kubenswrapper[4874]: I0122 12:06:06.872154 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vq2h8"] Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.007065 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-jcvt5"] Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.007957 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.012545 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-ghwkd" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.012709 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.012879 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.016740 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.017353 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.017507 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Jan 22 
12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.017649 4874 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.025129 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-jcvt5"] Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.053536 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj82v\" (UniqueName: \"kubernetes.io/projected/be315a0a-e88a-4a39-b705-b826a4285fc2-kube-api-access-cj82v\") pod \"community-operators-vq2h8\" (UID: \"be315a0a-e88a-4a39-b705-b826a4285fc2\") " pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.053603 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be315a0a-e88a-4a39-b705-b826a4285fc2-catalog-content\") pod \"community-operators-vq2h8\" (UID: \"be315a0a-e88a-4a39-b705-b826a4285fc2\") " pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.053734 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be315a0a-e88a-4a39-b705-b826a4285fc2-utilities\") pod \"community-operators-vq2h8\" (UID: \"be315a0a-e88a-4a39-b705-b826a4285fc2\") " pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.155035 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/da6dd65e-8b70-41a1-8a38-83392c65aef3-sasl-config\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " 
pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.155084 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be315a0a-e88a-4a39-b705-b826a4285fc2-catalog-content\") pod \"community-operators-vq2h8\" (UID: \"be315a0a-e88a-4a39-b705-b826a4285fc2\") " pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.155108 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-sasl-users\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.155136 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.155166 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tjnb\" (UniqueName: \"kubernetes.io/projected/da6dd65e-8b70-41a1-8a38-83392c65aef3-kube-api-access-6tjnb\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.155197 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.155218 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be315a0a-e88a-4a39-b705-b826a4285fc2-utilities\") pod \"community-operators-vq2h8\" (UID: \"be315a0a-e88a-4a39-b705-b826a4285fc2\") " pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.155253 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.155271 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj82v\" (UniqueName: \"kubernetes.io/projected/be315a0a-e88a-4a39-b705-b826a4285fc2-kube-api-access-cj82v\") pod \"community-operators-vq2h8\" (UID: \"be315a0a-e88a-4a39-b705-b826a4285fc2\") " pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.155301 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: 
\"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.155715 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be315a0a-e88a-4a39-b705-b826a4285fc2-catalog-content\") pod \"community-operators-vq2h8\" (UID: \"be315a0a-e88a-4a39-b705-b826a4285fc2\") " pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.155939 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be315a0a-e88a-4a39-b705-b826a4285fc2-utilities\") pod \"community-operators-vq2h8\" (UID: \"be315a0a-e88a-4a39-b705-b826a4285fc2\") " pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.174683 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj82v\" (UniqueName: \"kubernetes.io/projected/be315a0a-e88a-4a39-b705-b826a4285fc2-kube-api-access-cj82v\") pod \"community-operators-vq2h8\" (UID: \"be315a0a-e88a-4a39-b705-b826a4285fc2\") " pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.223945 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.256106 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.256151 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/da6dd65e-8b70-41a1-8a38-83392c65aef3-sasl-config\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.256173 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-sasl-users\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.256196 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.256230 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tjnb\" 
(UniqueName: \"kubernetes.io/projected/da6dd65e-8b70-41a1-8a38-83392c65aef3-kube-api-access-6tjnb\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.256268 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.256309 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.258219 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/da6dd65e-8b70-41a1-8a38-83392c65aef3-sasl-config\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.259193 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-sasl-users\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc 
kubenswrapper[4874]: I0122 12:06:07.262936 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.276006 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.276046 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.293090 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/da6dd65e-8b70-41a1-8a38-83392c65aef3-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.293127 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tjnb\" (UniqueName: 
\"kubernetes.io/projected/da6dd65e-8b70-41a1-8a38-83392c65aef3-kube-api-access-6tjnb\") pod \"default-interconnect-68864d46cb-jcvt5\" (UID: \"da6dd65e-8b70-41a1-8a38-83392c65aef3\") " pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.342048 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.686232 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" event={"ID":"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3","Type":"ContainerStarted","Data":"699a19bb9b06b8eb15d6f8ce1695ec22e8bc5543fd8a9c4aaee428742ce39ca8"} Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.698208 4874 generic.go:334] "Generic (PLEG): container finished" podID="1cca42d2-5390-4dc0-90fd-6abef972f828" containerID="0b8c4a0b0ba7c449f8d752c598d619f6497f68d22e47bcf468a644699c35b9f8" exitCode=0 Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.698368 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" event={"ID":"1cca42d2-5390-4dc0-90fd-6abef972f828","Type":"ContainerDied","Data":"0b8c4a0b0ba7c449f8d752c598d619f6497f68d22e47bcf468a644699c35b9f8"} Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.699346 4874 scope.go:117] "RemoveContainer" containerID="0b8c4a0b0ba7c449f8d752c598d619f6497f68d22e47bcf468a644699c35b9f8" Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.703802 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" event={"ID":"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2","Type":"ContainerStarted","Data":"8871bcfaa155a6fc80df16ab23ea84232653fb97737ee28e90257e9bd644fd42"} Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.709109 4874 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" event={"ID":"79746e6a-bee0-42eb-840c-36075d04abde","Type":"ContainerStarted","Data":"241fe00fc340b80eefba0f92d3bdf67c1dc9e3306e497d8b33550b4e18947a08"} Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.724886 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg" event={"ID":"16eb2f20-a170-4504-8601-a08d7174edef","Type":"ContainerStarted","Data":"9950cfa7893dfcae9ef6566159771b0e333f0fd846d8eadc4a654a109358eb87"} Jan 22 12:06:07 crc kubenswrapper[4874]: W0122 12:06:07.801473 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe315a0a_e88a_4a39_b705_b826a4285fc2.slice/crio-6095ec06d7054068d7a98b6e4ccf292ac930e508cf3c276df0c4600afa1fd83e WatchSource:0}: Error finding container 6095ec06d7054068d7a98b6e4ccf292ac930e508cf3c276df0c4600afa1fd83e: Status 404 returned error can't find the container with id 6095ec06d7054068d7a98b6e4ccf292ac930e508cf3c276df0c4600afa1fd83e Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.805597 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vq2h8"] Jan 22 12:06:07 crc kubenswrapper[4874]: I0122 12:06:07.905994 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-jcvt5"] Jan 22 12:06:08 crc kubenswrapper[4874]: E0122 12:06:08.268379 4874 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01e2e1fb_0e3b_4c3b_b5c5_df459233c6e3.slice/crio-conmon-699a19bb9b06b8eb15d6f8ce1695ec22e8bc5543fd8a9c4aaee428742ce39ca8.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c0b7de5_b0a4_4c4d_93c3_1feba19a95a2.slice/crio-8871bcfaa155a6fc80df16ab23ea84232653fb97737ee28e90257e9bd644fd42.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01e2e1fb_0e3b_4c3b_b5c5_df459233c6e3.slice/crio-699a19bb9b06b8eb15d6f8ce1695ec22e8bc5543fd8a9c4aaee428742ce39ca8.scope\": RecentStats: unable to find data in memory cache]" Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.726568 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527" path="/var/lib/kubelet/pods/c4d0bf4d-4f56-41b0-ab4a-a6603bbe0527/volumes" Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.747051 4874 generic.go:334] "Generic (PLEG): container finished" podID="01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3" containerID="699a19bb9b06b8eb15d6f8ce1695ec22e8bc5543fd8a9c4aaee428742ce39ca8" exitCode=0 Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.747121 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" event={"ID":"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3","Type":"ContainerDied","Data":"699a19bb9b06b8eb15d6f8ce1695ec22e8bc5543fd8a9c4aaee428742ce39ca8"} Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.747153 4874 scope.go:117] "RemoveContainer" containerID="514e40ce0ac94f7f5ee03f53f502f032cb0df43f43b4da1487e85d3afe4ac50a" Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.747713 4874 scope.go:117] "RemoveContainer" containerID="699a19bb9b06b8eb15d6f8ce1695ec22e8bc5543fd8a9c4aaee428742ce39ca8" Jan 22 12:06:08 crc kubenswrapper[4874]: E0122 12:06:08.747983 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge 
pod=default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9_service-telemetry(01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" podUID="01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3" Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.752928 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" event={"ID":"da6dd65e-8b70-41a1-8a38-83392c65aef3","Type":"ContainerStarted","Data":"e1c4dc04409f8689fa42459366ed59cdfc73e546395757896efc9a0f219b7438"} Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.752965 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" event={"ID":"da6dd65e-8b70-41a1-8a38-83392c65aef3","Type":"ContainerStarted","Data":"b080c40c0d5d91f79b4bd8a142e5f2cdb0629430058eed1d125dc0190ff3db3e"} Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.756429 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2" containerID="8871bcfaa155a6fc80df16ab23ea84232653fb97737ee28e90257e9bd644fd42" exitCode=0 Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.756479 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" event={"ID":"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2","Type":"ContainerDied","Data":"8871bcfaa155a6fc80df16ab23ea84232653fb97737ee28e90257e9bd644fd42"} Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.756838 4874 scope.go:117] "RemoveContainer" containerID="8871bcfaa155a6fc80df16ab23ea84232653fb97737ee28e90257e9bd644fd42" Jan 22 12:06:08 crc kubenswrapper[4874]: E0122 12:06:08.756999 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge 
pod=default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf_service-telemetry(7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" podUID="7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2" Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.758924 4874 generic.go:334] "Generic (PLEG): container finished" podID="79746e6a-bee0-42eb-840c-36075d04abde" containerID="241fe00fc340b80eefba0f92d3bdf67c1dc9e3306e497d8b33550b4e18947a08" exitCode=0 Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.758965 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" event={"ID":"79746e6a-bee0-42eb-840c-36075d04abde","Type":"ContainerDied","Data":"241fe00fc340b80eefba0f92d3bdf67c1dc9e3306e497d8b33550b4e18947a08"} Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.759259 4874 scope.go:117] "RemoveContainer" containerID="241fe00fc340b80eefba0f92d3bdf67c1dc9e3306e497d8b33550b4e18947a08" Jan 22 12:06:08 crc kubenswrapper[4874]: E0122 12:06:08.759449 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7_service-telemetry(79746e6a-bee0-42eb-840c-36075d04abde)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" podUID="79746e6a-bee0-42eb-840c-36075d04abde" Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.787236 4874 generic.go:334] "Generic (PLEG): container finished" podID="be315a0a-e88a-4a39-b705-b826a4285fc2" containerID="4119410dea713ff391706990d3ca3df3ffd65b7a0e739319efe9d22e1aefc65b" exitCode=0 Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.787439 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq2h8" 
event={"ID":"be315a0a-e88a-4a39-b705-b826a4285fc2","Type":"ContainerDied","Data":"4119410dea713ff391706990d3ca3df3ffd65b7a0e739319efe9d22e1aefc65b"} Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.787637 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq2h8" event={"ID":"be315a0a-e88a-4a39-b705-b826a4285fc2","Type":"ContainerStarted","Data":"6095ec06d7054068d7a98b6e4ccf292ac930e508cf3c276df0c4600afa1fd83e"} Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.800674 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-jcvt5" podStartSLOduration=3.800658912 podStartE2EDuration="3.800658912s" podCreationTimestamp="2026-01-22 12:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:06:08.797837826 +0000 UTC m=+1542.642908896" watchObservedRunningTime="2026-01-22 12:06:08.800658912 +0000 UTC m=+1542.645729982" Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.849977 4874 scope.go:117] "RemoveContainer" containerID="461c8f650e5a8a977666075e1277e3c9267495a8c7840f9acdf0c5d6a6800f08" Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.850153 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" event={"ID":"1cca42d2-5390-4dc0-90fd-6abef972f828","Type":"ContainerStarted","Data":"03a5f9d616cc942c97a12adda95ac1fb8d538715f65c06b4d599618c1bca3dc2"} Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.865782 4874 generic.go:334] "Generic (PLEG): container finished" podID="16eb2f20-a170-4504-8601-a08d7174edef" containerID="9950cfa7893dfcae9ef6566159771b0e333f0fd846d8eadc4a654a109358eb87" exitCode=0 Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.865826 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg" event={"ID":"16eb2f20-a170-4504-8601-a08d7174edef","Type":"ContainerDied","Data":"9950cfa7893dfcae9ef6566159771b0e333f0fd846d8eadc4a654a109358eb87"} Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.866298 4874 scope.go:117] "RemoveContainer" containerID="9950cfa7893dfcae9ef6566159771b0e333f0fd846d8eadc4a654a109358eb87" Jan 22 12:06:08 crc kubenswrapper[4874]: E0122 12:06:08.866523 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg_service-telemetry(16eb2f20-a170-4504-8601-a08d7174edef)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg" podUID="16eb2f20-a170-4504-8601-a08d7174edef" Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.895591 4874 scope.go:117] "RemoveContainer" containerID="09cb582dc76bac620de55013695a6049eb521a343c611714eaf77b7038c60d7e" Jan 22 12:06:08 crc kubenswrapper[4874]: I0122 12:06:08.982753 4874 scope.go:117] "RemoveContainer" containerID="d00586b99f40c23c83e98d796f708a92dd64728a1c6fc574bfa7124a09972b2f" Jan 22 12:06:09 crc kubenswrapper[4874]: I0122 12:06:09.877713 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq2h8" event={"ID":"be315a0a-e88a-4a39-b705-b826a4285fc2","Type":"ContainerStarted","Data":"198d6b87e2554ce51600165ca3a1b9773fd1588950afac105198296df391db5b"} Jan 22 12:06:09 crc kubenswrapper[4874]: I0122 12:06:09.879465 4874 generic.go:334] "Generic (PLEG): container finished" podID="1cca42d2-5390-4dc0-90fd-6abef972f828" containerID="03a5f9d616cc942c97a12adda95ac1fb8d538715f65c06b4d599618c1bca3dc2" exitCode=0 Jan 22 12:06:09 crc kubenswrapper[4874]: I0122 12:06:09.879552 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" event={"ID":"1cca42d2-5390-4dc0-90fd-6abef972f828","Type":"ContainerDied","Data":"03a5f9d616cc942c97a12adda95ac1fb8d538715f65c06b4d599618c1bca3dc2"} Jan 22 12:06:09 crc kubenswrapper[4874]: I0122 12:06:09.879618 4874 scope.go:117] "RemoveContainer" containerID="0b8c4a0b0ba7c449f8d752c598d619f6497f68d22e47bcf468a644699c35b9f8" Jan 22 12:06:09 crc kubenswrapper[4874]: I0122 12:06:09.880115 4874 scope.go:117] "RemoveContainer" containerID="03a5f9d616cc942c97a12adda95ac1fb8d538715f65c06b4d599618c1bca3dc2" Jan 22 12:06:09 crc kubenswrapper[4874]: E0122 12:06:09.880300 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np_service-telemetry(1cca42d2-5390-4dc0-90fd-6abef972f828)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" podUID="1cca42d2-5390-4dc0-90fd-6abef972f828" Jan 22 12:06:10 crc kubenswrapper[4874]: I0122 12:06:10.901998 4874 generic.go:334] "Generic (PLEG): container finished" podID="be315a0a-e88a-4a39-b705-b826a4285fc2" containerID="198d6b87e2554ce51600165ca3a1b9773fd1588950afac105198296df391db5b" exitCode=0 Jan 22 12:06:10 crc kubenswrapper[4874]: I0122 12:06:10.902107 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq2h8" event={"ID":"be315a0a-e88a-4a39-b705-b826a4285fc2","Type":"ContainerDied","Data":"198d6b87e2554ce51600165ca3a1b9773fd1588950afac105198296df391db5b"} Jan 22 12:06:13 crc kubenswrapper[4874]: I0122 12:06:13.520693 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 22 12:06:13 crc kubenswrapper[4874]: I0122 12:06:13.521070 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:06:18 crc kubenswrapper[4874]: I0122 12:06:18.958025 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"056ff179-9e03-4f48-b3d3-fa995d0c4093","Type":"ContainerStarted","Data":"c69f816f67c4796d2c865aeebd8564ed2b53358d7a60b573e98386afedcdfb4a"} Jan 22 12:06:18 crc kubenswrapper[4874]: I0122 12:06:18.960020 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq2h8" event={"ID":"be315a0a-e88a-4a39-b705-b826a4285fc2","Type":"ContainerStarted","Data":"d0a23445908893fbd592dcf2f022e7c2732381ff0f6eefd00b3e70840bac239c"} Jan 22 12:06:18 crc kubenswrapper[4874]: I0122 12:06:18.971042 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.324265538 podStartE2EDuration="14.9710295s" podCreationTimestamp="2026-01-22 12:06:04 +0000 UTC" firstStartedPulling="2026-01-22 12:06:05.719533517 +0000 UTC m=+1539.564604587" lastFinishedPulling="2026-01-22 12:06:18.366297469 +0000 UTC m=+1552.211368549" observedRunningTime="2026-01-22 12:06:18.969840554 +0000 UTC m=+1552.814911624" watchObservedRunningTime="2026-01-22 12:06:18.9710295 +0000 UTC m=+1552.816100570" Jan 22 12:06:18 crc kubenswrapper[4874]: I0122 12:06:18.986618 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vq2h8" podStartSLOduration=3.426900384 podStartE2EDuration="12.986600998s" podCreationTimestamp="2026-01-22 12:06:06 +0000 UTC" firstStartedPulling="2026-01-22 
12:06:08.791058979 +0000 UTC m=+1542.636130049" lastFinishedPulling="2026-01-22 12:06:18.350759593 +0000 UTC m=+1552.195830663" observedRunningTime="2026-01-22 12:06:18.984310347 +0000 UTC m=+1552.829381417" watchObservedRunningTime="2026-01-22 12:06:18.986600998 +0000 UTC m=+1552.831672058" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.653443 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-fpm2k"] Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.654458 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.656749 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.656782 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.657107 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.657110 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.657318 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.657668 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.671842 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-fpm2k"] Jan 22 12:06:19 crc 
kubenswrapper[4874]: I0122 12:06:19.715882 4874 scope.go:117] "RemoveContainer" containerID="241fe00fc340b80eefba0f92d3bdf67c1dc9e3306e497d8b33550b4e18947a08" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.749234 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-sensubility-config\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.749286 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-healthcheck-log\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.749352 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.749464 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-collectd-config\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.749508 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-ceilometer-publisher\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.749575 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqqp6\" (UniqueName: \"kubernetes.io/projected/91078bec-5716-4d9f-ade0-53de332b1871-kube-api-access-rqqp6\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.749602 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.850840 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-sensubility-config\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.850886 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-healthcheck-log\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.850941 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.851013 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-collectd-config\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.851050 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-ceilometer-publisher\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.851097 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqqp6\" (UniqueName: \"kubernetes.io/projected/91078bec-5716-4d9f-ade0-53de332b1871-kube-api-access-rqqp6\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.851114 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc 
kubenswrapper[4874]: I0122 12:06:19.852226 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-collectd-config\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.852735 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-ceilometer-publisher\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.853146 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-sensubility-config\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.853256 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-healthcheck-log\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.853501 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: 
I0122 12:06:19.853705 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.876290 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqqp6\" (UniqueName: \"kubernetes.io/projected/91078bec-5716-4d9f-ade0-53de332b1871-kube-api-access-rqqp6\") pod \"stf-smoketest-smoke1-fpm2k\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:19 crc kubenswrapper[4874]: I0122 12:06:19.973343 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.034859 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.036004 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.069053 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.158004 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5wbx\" (UniqueName: \"kubernetes.io/projected/cd23c25e-9a15-4e31-be24-5d227c6ed353-kube-api-access-s5wbx\") pod \"curl\" (UID: \"cd23c25e-9a15-4e31-be24-5d227c6ed353\") " pod="service-telemetry/curl" Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.259703 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5wbx\" (UniqueName: \"kubernetes.io/projected/cd23c25e-9a15-4e31-be24-5d227c6ed353-kube-api-access-s5wbx\") pod \"curl\" (UID: \"cd23c25e-9a15-4e31-be24-5d227c6ed353\") " pod="service-telemetry/curl" Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.282201 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5wbx\" (UniqueName: \"kubernetes.io/projected/cd23c25e-9a15-4e31-be24-5d227c6ed353-kube-api-access-s5wbx\") pod \"curl\" (UID: \"cd23c25e-9a15-4e31-be24-5d227c6ed353\") " pod="service-telemetry/curl" Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.357524 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.412224 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-fpm2k"] Jan 22 12:06:20 crc kubenswrapper[4874]: W0122 12:06:20.420721 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91078bec_5716_4d9f_ade0_53de332b1871.slice/crio-16ed44bb34a808918bff425d12ba06fe0641976cb78ef34a60d35a441ffa9857 WatchSource:0}: Error finding container 16ed44bb34a808918bff425d12ba06fe0641976cb78ef34a60d35a441ffa9857: Status 404 returned error can't find the container with id 16ed44bb34a808918bff425d12ba06fe0641976cb78ef34a60d35a441ffa9857 Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.719500 4874 scope.go:117] "RemoveContainer" containerID="03a5f9d616cc942c97a12adda95ac1fb8d538715f65c06b4d599618c1bca3dc2" Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.720428 4874 scope.go:117] "RemoveContainer" containerID="9950cfa7893dfcae9ef6566159771b0e333f0fd846d8eadc4a654a109358eb87" Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.720805 4874 scope.go:117] "RemoveContainer" containerID="699a19bb9b06b8eb15d6f8ce1695ec22e8bc5543fd8a9c4aaee428742ce39ca8" Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.800441 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.980902 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7" event={"ID":"79746e6a-bee0-42eb-840c-36075d04abde","Type":"ContainerStarted","Data":"cea9253bbeb1e96e5fbe0807df172fd7931a2d7820e5965c098f7f75af5ec4d8"} Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.983640 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-fpm2k" 
event={"ID":"91078bec-5716-4d9f-ade0-53de332b1871","Type":"ContainerStarted","Data":"16ed44bb34a808918bff425d12ba06fe0641976cb78ef34a60d35a441ffa9857"} Jan 22 12:06:20 crc kubenswrapper[4874]: I0122 12:06:20.985106 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"cd23c25e-9a15-4e31-be24-5d227c6ed353","Type":"ContainerStarted","Data":"cd77744055f6f640939707b45933fd4765f68b786e5c80424cc767d9fde5ab83"} Jan 22 12:06:22 crc kubenswrapper[4874]: I0122 12:06:22.716480 4874 scope.go:117] "RemoveContainer" containerID="8871bcfaa155a6fc80df16ab23ea84232653fb97737ee28e90257e9bd644fd42" Jan 22 12:06:23 crc kubenswrapper[4874]: I0122 12:06:23.018057 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np" event={"ID":"1cca42d2-5390-4dc0-90fd-6abef972f828","Type":"ContainerStarted","Data":"0690e473692dea624820ef6cdbdca8f8bad3bda1058f3a6c42e0cbf6b26d1e8e"} Jan 22 12:06:23 crc kubenswrapper[4874]: I0122 12:06:23.056379 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg" event={"ID":"16eb2f20-a170-4504-8601-a08d7174edef","Type":"ContainerStarted","Data":"cfd3081857be454b30191ece8c21d3207394a76215241e108f66bf635fa62517"} Jan 22 12:06:24 crc kubenswrapper[4874]: I0122 12:06:24.067367 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf" event={"ID":"7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2","Type":"ContainerStarted","Data":"b368544b902c3c3ab9b296ade349185e174c7a61d1ba17c7faa0ef3f89219855"} Jan 22 12:06:24 crc kubenswrapper[4874]: I0122 12:06:24.070489 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9" 
event={"ID":"01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3","Type":"ContainerStarted","Data":"ac263c6d72ebfd50563b514b8917992ee1d7623d1fe32c5985f30e4b885f2b64"} Jan 22 12:06:25 crc kubenswrapper[4874]: I0122 12:06:25.090158 4874 generic.go:334] "Generic (PLEG): container finished" podID="cd23c25e-9a15-4e31-be24-5d227c6ed353" containerID="3632d001d41bcbca464db121b0d9020680ba26e3f8c8d7e16a16e237f6ec3f69" exitCode=0 Jan 22 12:06:25 crc kubenswrapper[4874]: I0122 12:06:25.090251 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"cd23c25e-9a15-4e31-be24-5d227c6ed353","Type":"ContainerDied","Data":"3632d001d41bcbca464db121b0d9020680ba26e3f8c8d7e16a16e237f6ec3f69"} Jan 22 12:06:27 crc kubenswrapper[4874]: I0122 12:06:27.225127 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:27 crc kubenswrapper[4874]: I0122 12:06:27.225498 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:27 crc kubenswrapper[4874]: I0122 12:06:27.267877 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:28 crc kubenswrapper[4874]: I0122 12:06:28.158052 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:28 crc kubenswrapper[4874]: I0122 12:06:28.205700 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vq2h8"] Jan 22 12:06:28 crc kubenswrapper[4874]: I0122 12:06:28.622143 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 22 12:06:28 crc kubenswrapper[4874]: I0122 12:06:28.702594 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5wbx\" (UniqueName: \"kubernetes.io/projected/cd23c25e-9a15-4e31-be24-5d227c6ed353-kube-api-access-s5wbx\") pod \"cd23c25e-9a15-4e31-be24-5d227c6ed353\" (UID: \"cd23c25e-9a15-4e31-be24-5d227c6ed353\") " Jan 22 12:06:28 crc kubenswrapper[4874]: I0122 12:06:28.721009 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd23c25e-9a15-4e31-be24-5d227c6ed353-kube-api-access-s5wbx" (OuterVolumeSpecName: "kube-api-access-s5wbx") pod "cd23c25e-9a15-4e31-be24-5d227c6ed353" (UID: "cd23c25e-9a15-4e31-be24-5d227c6ed353"). InnerVolumeSpecName "kube-api-access-s5wbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:06:28 crc kubenswrapper[4874]: I0122 12:06:28.781839 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_cd23c25e-9a15-4e31-be24-5d227c6ed353/curl/0.log" Jan 22 12:06:28 crc kubenswrapper[4874]: I0122 12:06:28.804539 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5wbx\" (UniqueName: \"kubernetes.io/projected/cd23c25e-9a15-4e31-be24-5d227c6ed353-kube-api-access-s5wbx\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:29 crc kubenswrapper[4874]: I0122 12:06:29.112011 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-9qcrx_a14f7a5d-1819-4ca2-8683-96a338c70df6/prometheus-webhook-snmp/0.log" Jan 22 12:06:29 crc kubenswrapper[4874]: I0122 12:06:29.121733 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"cd23c25e-9a15-4e31-be24-5d227c6ed353","Type":"ContainerDied","Data":"cd77744055f6f640939707b45933fd4765f68b786e5c80424cc767d9fde5ab83"} Jan 22 12:06:29 crc kubenswrapper[4874]: I0122 12:06:29.121769 4874 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd77744055f6f640939707b45933fd4765f68b786e5c80424cc767d9fde5ab83" Jan 22 12:06:29 crc kubenswrapper[4874]: I0122 12:06:29.121872 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 22 12:06:30 crc kubenswrapper[4874]: I0122 12:06:30.129717 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vq2h8" podUID="be315a0a-e88a-4a39-b705-b826a4285fc2" containerName="registry-server" containerID="cri-o://d0a23445908893fbd592dcf2f022e7c2732381ff0f6eefd00b3e70840bac239c" gracePeriod=2 Jan 22 12:06:31 crc kubenswrapper[4874]: I0122 12:06:31.139944 4874 generic.go:334] "Generic (PLEG): container finished" podID="be315a0a-e88a-4a39-b705-b826a4285fc2" containerID="d0a23445908893fbd592dcf2f022e7c2732381ff0f6eefd00b3e70840bac239c" exitCode=0 Jan 22 12:06:31 crc kubenswrapper[4874]: I0122 12:06:31.139985 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq2h8" event={"ID":"be315a0a-e88a-4a39-b705-b826a4285fc2","Type":"ContainerDied","Data":"d0a23445908893fbd592dcf2f022e7c2732381ff0f6eefd00b3e70840bac239c"} Jan 22 12:06:33 crc kubenswrapper[4874]: I0122 12:06:33.415474 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:33 crc kubenswrapper[4874]: I0122 12:06:33.569956 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj82v\" (UniqueName: \"kubernetes.io/projected/be315a0a-e88a-4a39-b705-b826a4285fc2-kube-api-access-cj82v\") pod \"be315a0a-e88a-4a39-b705-b826a4285fc2\" (UID: \"be315a0a-e88a-4a39-b705-b826a4285fc2\") " Jan 22 12:06:33 crc kubenswrapper[4874]: I0122 12:06:33.570150 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be315a0a-e88a-4a39-b705-b826a4285fc2-utilities\") pod \"be315a0a-e88a-4a39-b705-b826a4285fc2\" (UID: \"be315a0a-e88a-4a39-b705-b826a4285fc2\") " Jan 22 12:06:33 crc kubenswrapper[4874]: I0122 12:06:33.570235 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be315a0a-e88a-4a39-b705-b826a4285fc2-catalog-content\") pod \"be315a0a-e88a-4a39-b705-b826a4285fc2\" (UID: \"be315a0a-e88a-4a39-b705-b826a4285fc2\") " Jan 22 12:06:33 crc kubenswrapper[4874]: I0122 12:06:33.571130 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be315a0a-e88a-4a39-b705-b826a4285fc2-utilities" (OuterVolumeSpecName: "utilities") pod "be315a0a-e88a-4a39-b705-b826a4285fc2" (UID: "be315a0a-e88a-4a39-b705-b826a4285fc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:06:33 crc kubenswrapper[4874]: I0122 12:06:33.587072 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be315a0a-e88a-4a39-b705-b826a4285fc2-kube-api-access-cj82v" (OuterVolumeSpecName: "kube-api-access-cj82v") pod "be315a0a-e88a-4a39-b705-b826a4285fc2" (UID: "be315a0a-e88a-4a39-b705-b826a4285fc2"). InnerVolumeSpecName "kube-api-access-cj82v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:06:33 crc kubenswrapper[4874]: I0122 12:06:33.648917 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be315a0a-e88a-4a39-b705-b826a4285fc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be315a0a-e88a-4a39-b705-b826a4285fc2" (UID: "be315a0a-e88a-4a39-b705-b826a4285fc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:06:33 crc kubenswrapper[4874]: I0122 12:06:33.671668 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be315a0a-e88a-4a39-b705-b826a4285fc2-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:33 crc kubenswrapper[4874]: I0122 12:06:33.671698 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be315a0a-e88a-4a39-b705-b826a4285fc2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:33 crc kubenswrapper[4874]: I0122 12:06:33.671712 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj82v\" (UniqueName: \"kubernetes.io/projected/be315a0a-e88a-4a39-b705-b826a4285fc2-kube-api-access-cj82v\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:34 crc kubenswrapper[4874]: I0122 12:06:34.167103 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq2h8" event={"ID":"be315a0a-e88a-4a39-b705-b826a4285fc2","Type":"ContainerDied","Data":"6095ec06d7054068d7a98b6e4ccf292ac930e508cf3c276df0c4600afa1fd83e"} Jan 22 12:06:34 crc kubenswrapper[4874]: I0122 12:06:34.167187 4874 scope.go:117] "RemoveContainer" containerID="d0a23445908893fbd592dcf2f022e7c2732381ff0f6eefd00b3e70840bac239c" Jan 22 12:06:34 crc kubenswrapper[4874]: I0122 12:06:34.167134 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vq2h8" Jan 22 12:06:34 crc kubenswrapper[4874]: I0122 12:06:34.168898 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-fpm2k" event={"ID":"91078bec-5716-4d9f-ade0-53de332b1871","Type":"ContainerStarted","Data":"38b91d3438b9959ba5ddfc6ef9264d72a0bd2d404dd8c5472433d88d1af78849"} Jan 22 12:06:34 crc kubenswrapper[4874]: I0122 12:06:34.198773 4874 scope.go:117] "RemoveContainer" containerID="198d6b87e2554ce51600165ca3a1b9773fd1588950afac105198296df391db5b" Jan 22 12:06:34 crc kubenswrapper[4874]: I0122 12:06:34.222359 4874 scope.go:117] "RemoveContainer" containerID="4119410dea713ff391706990d3ca3df3ffd65b7a0e739319efe9d22e1aefc65b" Jan 22 12:06:34 crc kubenswrapper[4874]: I0122 12:06:34.223670 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vq2h8"] Jan 22 12:06:34 crc kubenswrapper[4874]: I0122 12:06:34.231809 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vq2h8"] Jan 22 12:06:34 crc kubenswrapper[4874]: I0122 12:06:34.733247 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be315a0a-e88a-4a39-b705-b826a4285fc2" path="/var/lib/kubelet/pods/be315a0a-e88a-4a39-b705-b826a4285fc2/volumes" Jan 22 12:06:39 crc kubenswrapper[4874]: I0122 12:06:39.211452 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-fpm2k" event={"ID":"91078bec-5716-4d9f-ade0-53de332b1871","Type":"ContainerStarted","Data":"19047b524cd4ee68868d7fda810d116221c6871b036d5650ee13bfaeb07ec127"} Jan 22 12:06:39 crc kubenswrapper[4874]: I0122 12:06:39.245034 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-fpm2k" podStartSLOduration=1.98738127 podStartE2EDuration="20.245007741s" podCreationTimestamp="2026-01-22 12:06:19 +0000 UTC" 
firstStartedPulling="2026-01-22 12:06:20.422426569 +0000 UTC m=+1554.267497639" lastFinishedPulling="2026-01-22 12:06:38.68005304 +0000 UTC m=+1572.525124110" observedRunningTime="2026-01-22 12:06:39.234474014 +0000 UTC m=+1573.079545174" watchObservedRunningTime="2026-01-22 12:06:39.245007741 +0000 UTC m=+1573.090078851" Jan 22 12:06:43 crc kubenswrapper[4874]: I0122 12:06:43.520474 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:06:43 crc kubenswrapper[4874]: I0122 12:06:43.520835 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:06:43 crc kubenswrapper[4874]: I0122 12:06:43.520881 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 12:06:43 crc kubenswrapper[4874]: I0122 12:06:43.521527 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:06:43 crc kubenswrapper[4874]: I0122 12:06:43.521585 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" 
containerName="machine-config-daemon" containerID="cri-o://9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" gracePeriod=600 Jan 22 12:06:43 crc kubenswrapper[4874]: E0122 12:06:43.657483 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:06:44 crc kubenswrapper[4874]: I0122 12:06:44.260245 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" exitCode=0 Jan 22 12:06:44 crc kubenswrapper[4874]: I0122 12:06:44.260336 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"} Jan 22 12:06:44 crc kubenswrapper[4874]: I0122 12:06:44.260445 4874 scope.go:117] "RemoveContainer" containerID="75a7548055039130cae1c6ba4be8efe015a8bd6db752337a976c1be606300781" Jan 22 12:06:44 crc kubenswrapper[4874]: I0122 12:06:44.261263 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:06:44 crc kubenswrapper[4874]: E0122 12:06:44.261996 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:06:58 crc kubenswrapper[4874]: I0122 12:06:58.717092 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:06:58 crc kubenswrapper[4874]: E0122 12:06:58.718209 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:06:59 crc kubenswrapper[4874]: I0122 12:06:59.275083 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-9qcrx_a14f7a5d-1819-4ca2-8683-96a338c70df6/prometheus-webhook-snmp/0.log" Jan 22 12:07:07 crc kubenswrapper[4874]: I0122 12:07:07.460641 4874 generic.go:334] "Generic (PLEG): container finished" podID="91078bec-5716-4d9f-ade0-53de332b1871" containerID="38b91d3438b9959ba5ddfc6ef9264d72a0bd2d404dd8c5472433d88d1af78849" exitCode=1 Jan 22 12:07:07 crc kubenswrapper[4874]: I0122 12:07:07.460736 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-fpm2k" event={"ID":"91078bec-5716-4d9f-ade0-53de332b1871","Type":"ContainerDied","Data":"38b91d3438b9959ba5ddfc6ef9264d72a0bd2d404dd8c5472433d88d1af78849"} Jan 22 12:07:07 crc kubenswrapper[4874]: I0122 12:07:07.462500 4874 scope.go:117] "RemoveContainer" containerID="38b91d3438b9959ba5ddfc6ef9264d72a0bd2d404dd8c5472433d88d1af78849" Jan 22 12:07:10 crc kubenswrapper[4874]: I0122 12:07:10.491960 4874 generic.go:334] "Generic (PLEG): container finished" podID="91078bec-5716-4d9f-ade0-53de332b1871" 
containerID="19047b524cd4ee68868d7fda810d116221c6871b036d5650ee13bfaeb07ec127" exitCode=0 Jan 22 12:07:10 crc kubenswrapper[4874]: I0122 12:07:10.492616 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-fpm2k" event={"ID":"91078bec-5716-4d9f-ade0-53de332b1871","Type":"ContainerDied","Data":"19047b524cd4ee68868d7fda810d116221c6871b036d5650ee13bfaeb07ec127"} Jan 22 12:07:10 crc kubenswrapper[4874]: I0122 12:07:10.718420 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:07:10 crc kubenswrapper[4874]: E0122 12:07:10.719184 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:07:11 crc kubenswrapper[4874]: I0122 12:07:11.881303 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:07:11 crc kubenswrapper[4874]: I0122 12:07:11.991889 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-sensubility-config\") pod \"91078bec-5716-4d9f-ade0-53de332b1871\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " Jan 22 12:07:11 crc kubenswrapper[4874]: I0122 12:07:11.991926 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-ceilometer-publisher\") pod \"91078bec-5716-4d9f-ade0-53de332b1871\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " Jan 22 12:07:11 crc kubenswrapper[4874]: I0122 12:07:11.992021 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqqp6\" (UniqueName: \"kubernetes.io/projected/91078bec-5716-4d9f-ade0-53de332b1871-kube-api-access-rqqp6\") pod \"91078bec-5716-4d9f-ade0-53de332b1871\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " Jan 22 12:07:11 crc kubenswrapper[4874]: I0122 12:07:11.992049 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-healthcheck-log\") pod \"91078bec-5716-4d9f-ade0-53de332b1871\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " Jan 22 12:07:11 crc kubenswrapper[4874]: I0122 12:07:11.992076 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-collectd-entrypoint-script\") pod \"91078bec-5716-4d9f-ade0-53de332b1871\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " Jan 22 12:07:11 crc kubenswrapper[4874]: I0122 12:07:11.992105 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-collectd-config\") pod \"91078bec-5716-4d9f-ade0-53de332b1871\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " Jan 22 12:07:11 crc kubenswrapper[4874]: I0122 12:07:11.992150 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-ceilometer-entrypoint-script\") pod \"91078bec-5716-4d9f-ade0-53de332b1871\" (UID: \"91078bec-5716-4d9f-ade0-53de332b1871\") " Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.000128 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91078bec-5716-4d9f-ade0-53de332b1871-kube-api-access-rqqp6" (OuterVolumeSpecName: "kube-api-access-rqqp6") pod "91078bec-5716-4d9f-ade0-53de332b1871" (UID: "91078bec-5716-4d9f-ade0-53de332b1871"). InnerVolumeSpecName "kube-api-access-rqqp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.011081 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "91078bec-5716-4d9f-ade0-53de332b1871" (UID: "91078bec-5716-4d9f-ade0-53de332b1871"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.012294 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "91078bec-5716-4d9f-ade0-53de332b1871" (UID: "91078bec-5716-4d9f-ade0-53de332b1871"). 
InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.012845 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "91078bec-5716-4d9f-ade0-53de332b1871" (UID: "91078bec-5716-4d9f-ade0-53de332b1871"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.014369 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "91078bec-5716-4d9f-ade0-53de332b1871" (UID: "91078bec-5716-4d9f-ade0-53de332b1871"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.015990 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "91078bec-5716-4d9f-ade0-53de332b1871" (UID: "91078bec-5716-4d9f-ade0-53de332b1871"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.023021 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "91078bec-5716-4d9f-ade0-53de332b1871" (UID: "91078bec-5716-4d9f-ade0-53de332b1871"). InnerVolumeSpecName "sensubility-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.093796 4874 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.093893 4874 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.093909 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqqp6\" (UniqueName: \"kubernetes.io/projected/91078bec-5716-4d9f-ade0-53de332b1871-kube-api-access-rqqp6\") on node \"crc\" DevicePath \"\"" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.093918 4874 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.093929 4874 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.093939 4874 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.093947 4874 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/91078bec-5716-4d9f-ade0-53de332b1871-ceilometer-entrypoint-script\") on node 
\"crc\" DevicePath \"\"" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.513022 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-fpm2k" event={"ID":"91078bec-5716-4d9f-ade0-53de332b1871","Type":"ContainerDied","Data":"16ed44bb34a808918bff425d12ba06fe0641976cb78ef34a60d35a441ffa9857"} Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.513079 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ed44bb34a808918bff425d12ba06fe0641976cb78ef34a60d35a441ffa9857" Jan 22 12:07:12 crc kubenswrapper[4874]: I0122 12:07:12.513100 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-fpm2k" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.042687 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-qb5kr"] Jan 22 12:07:20 crc kubenswrapper[4874]: E0122 12:07:20.044054 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91078bec-5716-4d9f-ade0-53de332b1871" containerName="smoketest-ceilometer" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.044087 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="91078bec-5716-4d9f-ade0-53de332b1871" containerName="smoketest-ceilometer" Jan 22 12:07:20 crc kubenswrapper[4874]: E0122 12:07:20.044114 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be315a0a-e88a-4a39-b705-b826a4285fc2" containerName="extract-utilities" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.044131 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="be315a0a-e88a-4a39-b705-b826a4285fc2" containerName="extract-utilities" Jan 22 12:07:20 crc kubenswrapper[4874]: E0122 12:07:20.044147 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91078bec-5716-4d9f-ade0-53de332b1871" containerName="smoketest-collectd" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 
12:07:20.044169 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="91078bec-5716-4d9f-ade0-53de332b1871" containerName="smoketest-collectd" Jan 22 12:07:20 crc kubenswrapper[4874]: E0122 12:07:20.044190 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd23c25e-9a15-4e31-be24-5d227c6ed353" containerName="curl" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.044206 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd23c25e-9a15-4e31-be24-5d227c6ed353" containerName="curl" Jan 22 12:07:20 crc kubenswrapper[4874]: E0122 12:07:20.044240 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be315a0a-e88a-4a39-b705-b826a4285fc2" containerName="extract-content" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.044255 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="be315a0a-e88a-4a39-b705-b826a4285fc2" containerName="extract-content" Jan 22 12:07:20 crc kubenswrapper[4874]: E0122 12:07:20.044288 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be315a0a-e88a-4a39-b705-b826a4285fc2" containerName="registry-server" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.044300 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="be315a0a-e88a-4a39-b705-b826a4285fc2" containerName="registry-server" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.044539 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd23c25e-9a15-4e31-be24-5d227c6ed353" containerName="curl" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.044557 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="91078bec-5716-4d9f-ade0-53de332b1871" containerName="smoketest-ceilometer" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.044582 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="be315a0a-e88a-4a39-b705-b826a4285fc2" containerName="registry-server" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.044600 4874 
memory_manager.go:354] "RemoveStaleState removing state" podUID="91078bec-5716-4d9f-ade0-53de332b1871" containerName="smoketest-collectd" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.046130 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.049800 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.050517 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.051034 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-qb5kr"] Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.051672 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.052126 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.052303 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.052836 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.129294 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mt5z\" (UniqueName: \"kubernetes.io/projected/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-kube-api-access-2mt5z\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: 
\"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.129347 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.129468 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-ceilometer-publisher\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.129527 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-collectd-config\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.129558 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-sensubility-config\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.129597 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.129642 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-healthcheck-log\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.231744 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-ceilometer-publisher\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.231956 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-collectd-config\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.232148 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-sensubility-config\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.232345 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.232465 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-healthcheck-log\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.232588 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mt5z\" (UniqueName: \"kubernetes.io/projected/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-kube-api-access-2mt5z\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.232741 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.233962 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-sensubility-config\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.234037 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.234110 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-collectd-config\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.234427 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.235186 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-healthcheck-log\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.235714 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-ceilometer-publisher\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.258256 4874 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2mt5z\" (UniqueName: \"kubernetes.io/projected/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-kube-api-access-2mt5z\") pod \"stf-smoketest-smoke1-qb5kr\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.394888 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:20 crc kubenswrapper[4874]: I0122 12:07:20.667773 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-qb5kr"] Jan 22 12:07:21 crc kubenswrapper[4874]: I0122 12:07:21.602243 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qb5kr" event={"ID":"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade","Type":"ContainerStarted","Data":"5db0d7d6f5affaaef96d0acc5b44b5c8ab6420c145f93bedc894a1712c50851b"} Jan 22 12:07:21 crc kubenswrapper[4874]: I0122 12:07:21.602952 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qb5kr" event={"ID":"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade","Type":"ContainerStarted","Data":"1cdaa03af8fbf3738ed2d9e7fe71986bd4c0931da9f26d90d6445d92aef346c4"} Jan 22 12:07:21 crc kubenswrapper[4874]: I0122 12:07:21.602980 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qb5kr" event={"ID":"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade","Type":"ContainerStarted","Data":"3c6c3534d1ad955c051613aa5e3cd4f190eb991eaeb3ea432b665596e6886c77"} Jan 22 12:07:21 crc kubenswrapper[4874]: I0122 12:07:21.632067 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-qb5kr" podStartSLOduration=1.6320396019999999 podStartE2EDuration="1.632039602s" podCreationTimestamp="2026-01-22 12:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:07:21.629247695 +0000 UTC m=+1615.474318765" watchObservedRunningTime="2026-01-22 12:07:21.632039602 +0000 UTC m=+1615.477110672" Jan 22 12:07:23 crc kubenswrapper[4874]: I0122 12:07:23.716365 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:07:23 crc kubenswrapper[4874]: E0122 12:07:23.717111 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:07:35 crc kubenswrapper[4874]: I0122 12:07:35.715895 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:07:35 crc kubenswrapper[4874]: E0122 12:07:35.717459 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:07:49 crc kubenswrapper[4874]: I0122 12:07:49.715975 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:07:49 crc kubenswrapper[4874]: E0122 12:07:49.717039 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:07:53 crc kubenswrapper[4874]: I0122 12:07:53.886939 4874 generic.go:334] "Generic (PLEG): container finished" podID="c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" containerID="5db0d7d6f5affaaef96d0acc5b44b5c8ab6420c145f93bedc894a1712c50851b" exitCode=0 Jan 22 12:07:53 crc kubenswrapper[4874]: I0122 12:07:53.887015 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qb5kr" event={"ID":"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade","Type":"ContainerDied","Data":"5db0d7d6f5affaaef96d0acc5b44b5c8ab6420c145f93bedc894a1712c50851b"} Jan 22 12:07:53 crc kubenswrapper[4874]: I0122 12:07:53.888502 4874 scope.go:117] "RemoveContainer" containerID="5db0d7d6f5affaaef96d0acc5b44b5c8ab6420c145f93bedc894a1712c50851b" Jan 22 12:07:54 crc kubenswrapper[4874]: I0122 12:07:54.896642 4874 generic.go:334] "Generic (PLEG): container finished" podID="c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" containerID="1cdaa03af8fbf3738ed2d9e7fe71986bd4c0931da9f26d90d6445d92aef346c4" exitCode=0 Jan 22 12:07:54 crc kubenswrapper[4874]: I0122 12:07:54.896693 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qb5kr" event={"ID":"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade","Type":"ContainerDied","Data":"1cdaa03af8fbf3738ed2d9e7fe71986bd4c0931da9f26d90d6445d92aef346c4"} Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.290205 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.486696 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-sensubility-config\") pod \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.487045 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-collectd-entrypoint-script\") pod \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.487229 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-ceilometer-publisher\") pod \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.487363 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-healthcheck-log\") pod \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.487539 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mt5z\" (UniqueName: \"kubernetes.io/projected/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-kube-api-access-2mt5z\") pod \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.487730 4874 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-collectd-config\") pod \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.487863 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-ceilometer-entrypoint-script\") pod \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\" (UID: \"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade\") " Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.496738 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-kube-api-access-2mt5z" (OuterVolumeSpecName: "kube-api-access-2mt5z") pod "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" (UID: "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade"). InnerVolumeSpecName "kube-api-access-2mt5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.506772 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" (UID: "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.509471 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" (UID: "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade"). InnerVolumeSpecName "sensubility-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.510200 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" (UID: "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.512604 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" (UID: "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.515133 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" (UID: "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.529096 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" (UID: "c1f3f1ed-6e23-4ad5-877c-522b9cb42ade"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.589371 4874 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.589461 4874 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.589480 4874 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.589500 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mt5z\" (UniqueName: \"kubernetes.io/projected/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-kube-api-access-2mt5z\") on node \"crc\" DevicePath \"\"" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.589517 4874 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.589533 4874 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.589549 4874 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/c1f3f1ed-6e23-4ad5-877c-522b9cb42ade-sensubility-config\") on node 
\"crc\" DevicePath \"\"" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.917620 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qb5kr" event={"ID":"c1f3f1ed-6e23-4ad5-877c-522b9cb42ade","Type":"ContainerDied","Data":"3c6c3534d1ad955c051613aa5e3cd4f190eb991eaeb3ea432b665596e6886c77"} Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.917715 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c6c3534d1ad955c051613aa5e3cd4f190eb991eaeb3ea432b665596e6886c77" Jan 22 12:07:56 crc kubenswrapper[4874]: I0122 12:07:56.917800 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-qb5kr" Jan 22 12:07:58 crc kubenswrapper[4874]: I0122 12:07:58.510299 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-fpm2k_91078bec-5716-4d9f-ade0-53de332b1871/smoketest-collectd/0.log" Jan 22 12:07:58 crc kubenswrapper[4874]: I0122 12:07:58.869748 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-fpm2k_91078bec-5716-4d9f-ade0-53de332b1871/smoketest-ceilometer/0.log" Jan 22 12:07:59 crc kubenswrapper[4874]: I0122 12:07:59.162535 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-jcvt5_da6dd65e-8b70-41a1-8a38-83392c65aef3/default-interconnect/0.log" Jan 22 12:07:59 crc kubenswrapper[4874]: I0122 12:07:59.479110 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg_16eb2f20-a170-4504-8601-a08d7174edef/bridge/2.log" Jan 22 12:07:59 crc kubenswrapper[4874]: I0122 12:07:59.764628 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg_16eb2f20-a170-4504-8601-a08d7174edef/sg-core/0.log" Jan 22 12:08:00 
crc kubenswrapper[4874]: I0122 12:08:00.013565 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7_79746e6a-bee0-42eb-840c-36075d04abde/bridge/2.log" Jan 22 12:08:00 crc kubenswrapper[4874]: I0122 12:08:00.296986 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7_79746e6a-bee0-42eb-840c-36075d04abde/sg-core/0.log" Jan 22 12:08:00 crc kubenswrapper[4874]: I0122 12:08:00.586517 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np_1cca42d2-5390-4dc0-90fd-6abef972f828/bridge/2.log" Jan 22 12:08:00 crc kubenswrapper[4874]: I0122 12:08:00.846144 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np_1cca42d2-5390-4dc0-90fd-6abef972f828/sg-core/0.log" Jan 22 12:08:01 crc kubenswrapper[4874]: I0122 12:08:01.111598 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9_01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3/bridge/2.log" Jan 22 12:08:01 crc kubenswrapper[4874]: I0122 12:08:01.364928 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9_01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3/sg-core/0.log" Jan 22 12:08:01 crc kubenswrapper[4874]: I0122 12:08:01.614024 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf_7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2/bridge/2.log" Jan 22 12:08:01 crc kubenswrapper[4874]: I0122 12:08:01.857007 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf_7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2/sg-core/0.log" Jan 22 12:08:03 crc kubenswrapper[4874]: I0122 12:08:03.716258 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:08:03 crc kubenswrapper[4874]: E0122 12:08:03.716970 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:08:05 crc kubenswrapper[4874]: I0122 12:08:05.681337 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-779b7b47dc-4zkhw_f01ecec4-43d6-424e-843b-ef89a732199e/operator/0.log" Jan 22 12:08:06 crc kubenswrapper[4874]: I0122 12:08:06.959776 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7cd2b3c5-32a6-49c1-984c-86aa8fe36f85/prometheus/0.log" Jan 22 12:08:07 crc kubenswrapper[4874]: I0122 12:08:07.235322 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_c9fa7f48-985a-46b3-a7dc-1b39dfc14243/elasticsearch/0.log" Jan 22 12:08:07 crc kubenswrapper[4874]: I0122 12:08:07.498051 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-9qcrx_a14f7a5d-1819-4ca2-8683-96a338c70df6/prometheus-webhook-snmp/0.log" Jan 22 12:08:07 crc kubenswrapper[4874]: I0122 12:08:07.790178 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_30722d93-5804-4aba-a6e3-e2b891356163/alertmanager/0.log" Jan 22 12:08:14 
crc kubenswrapper[4874]: I0122 12:08:14.717178 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:08:14 crc kubenswrapper[4874]: E0122 12:08:14.717991 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:08:23 crc kubenswrapper[4874]: I0122 12:08:23.055527 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-66768bfd46-6v9td_9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413/operator/0.log" Jan 22 12:08:26 crc kubenswrapper[4874]: I0122 12:08:26.578286 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-779b7b47dc-4zkhw_f01ecec4-43d6-424e-843b-ef89a732199e/operator/0.log" Jan 22 12:08:26 crc kubenswrapper[4874]: I0122 12:08:26.914077 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_056ff179-9e03-4f48-b3d3-fa995d0c4093/qdr/0.log" Jan 22 12:08:27 crc kubenswrapper[4874]: I0122 12:08:27.716823 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:08:27 crc kubenswrapper[4874]: E0122 12:08:27.717212 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" 
podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:08:39 crc kubenswrapper[4874]: I0122 12:08:39.716819 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:08:39 crc kubenswrapper[4874]: E0122 12:08:39.720306 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.392065 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-p8jdn/must-gather-vjgnn"] Jan 22 12:08:51 crc kubenswrapper[4874]: E0122 12:08:51.392737 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" containerName="smoketest-collectd" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.392750 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" containerName="smoketest-collectd" Jan 22 12:08:51 crc kubenswrapper[4874]: E0122 12:08:51.392759 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" containerName="smoketest-ceilometer" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.392765 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" containerName="smoketest-ceilometer" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.392881 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" containerName="smoketest-collectd" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.392902 4874 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c1f3f1ed-6e23-4ad5-877c-522b9cb42ade" containerName="smoketest-ceilometer" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.393631 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-p8jdn/must-gather-vjgnn" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.396412 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p8jdn"/"kube-root-ca.crt" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.396607 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-p8jdn"/"default-dockercfg-fcw2b" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.396619 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-p8jdn"/"openshift-service-ca.crt" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.406856 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p8jdn/must-gather-vjgnn"] Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.509609 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6lg5\" (UniqueName: \"kubernetes.io/projected/c0158516-a991-4b44-a001-e94e467d86bb-kube-api-access-c6lg5\") pod \"must-gather-vjgnn\" (UID: \"c0158516-a991-4b44-a001-e94e467d86bb\") " pod="openshift-must-gather-p8jdn/must-gather-vjgnn" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.509694 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c0158516-a991-4b44-a001-e94e467d86bb-must-gather-output\") pod \"must-gather-vjgnn\" (UID: \"c0158516-a991-4b44-a001-e94e467d86bb\") " pod="openshift-must-gather-p8jdn/must-gather-vjgnn" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.611365 4874 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c0158516-a991-4b44-a001-e94e467d86bb-must-gather-output\") pod \"must-gather-vjgnn\" (UID: \"c0158516-a991-4b44-a001-e94e467d86bb\") " pod="openshift-must-gather-p8jdn/must-gather-vjgnn" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.611473 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6lg5\" (UniqueName: \"kubernetes.io/projected/c0158516-a991-4b44-a001-e94e467d86bb-kube-api-access-c6lg5\") pod \"must-gather-vjgnn\" (UID: \"c0158516-a991-4b44-a001-e94e467d86bb\") " pod="openshift-must-gather-p8jdn/must-gather-vjgnn" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.611840 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c0158516-a991-4b44-a001-e94e467d86bb-must-gather-output\") pod \"must-gather-vjgnn\" (UID: \"c0158516-a991-4b44-a001-e94e467d86bb\") " pod="openshift-must-gather-p8jdn/must-gather-vjgnn" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.646039 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6lg5\" (UniqueName: \"kubernetes.io/projected/c0158516-a991-4b44-a001-e94e467d86bb-kube-api-access-c6lg5\") pod \"must-gather-vjgnn\" (UID: \"c0158516-a991-4b44-a001-e94e467d86bb\") " pod="openshift-must-gather-p8jdn/must-gather-vjgnn" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.719248 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-p8jdn/must-gather-vjgnn" Jan 22 12:08:51 crc kubenswrapper[4874]: I0122 12:08:51.926136 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-p8jdn/must-gather-vjgnn"] Jan 22 12:08:52 crc kubenswrapper[4874]: I0122 12:08:52.431064 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8jdn/must-gather-vjgnn" event={"ID":"c0158516-a991-4b44-a001-e94e467d86bb","Type":"ContainerStarted","Data":"c77ee96dabe444d06835fd65909ea3e64c10ef2ade708b20a2ec3f63f8d90fd9"} Jan 22 12:08:54 crc kubenswrapper[4874]: I0122 12:08:54.716566 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:08:54 crc kubenswrapper[4874]: E0122 12:08:54.716998 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:08:59 crc kubenswrapper[4874]: I0122 12:08:59.492683 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8jdn/must-gather-vjgnn" event={"ID":"c0158516-a991-4b44-a001-e94e467d86bb","Type":"ContainerStarted","Data":"4f99b8d22b3c6ac4771f24a766d4e5db8e117f232ae19fdaee5bbeae454a8a40"} Jan 22 12:08:59 crc kubenswrapper[4874]: I0122 12:08:59.494451 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-p8jdn/must-gather-vjgnn" event={"ID":"c0158516-a991-4b44-a001-e94e467d86bb","Type":"ContainerStarted","Data":"7b79b2c0b847e1079a6673f2439895793e3cb542df4be3a5ec6e2d56fc4f8a88"} Jan 22 12:08:59 crc kubenswrapper[4874]: I0122 12:08:59.516105 4874 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-p8jdn/must-gather-vjgnn" podStartSLOduration=1.4874119860000001 podStartE2EDuration="8.516078843s" podCreationTimestamp="2026-01-22 12:08:51 +0000 UTC" firstStartedPulling="2026-01-22 12:08:51.939875555 +0000 UTC m=+1705.784946625" lastFinishedPulling="2026-01-22 12:08:58.968542402 +0000 UTC m=+1712.813613482" observedRunningTime="2026-01-22 12:08:59.50660852 +0000 UTC m=+1713.351679600" watchObservedRunningTime="2026-01-22 12:08:59.516078843 +0000 UTC m=+1713.361149933" Jan 22 12:09:09 crc kubenswrapper[4874]: I0122 12:09:09.716513 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:09:09 crc kubenswrapper[4874]: E0122 12:09:09.717300 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:09:13 crc kubenswrapper[4874]: I0122 12:09:13.094381 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6jqjk_a785e748-a557-4c24-8a4d-dc03cfc3c357/control-plane-machine-set-operator/0.log" Jan 22 12:09:13 crc kubenswrapper[4874]: I0122 12:09:13.122393 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wmwcz_9bec261f-fd1e-44f7-a402-cae34f722b6c/kube-rbac-proxy/0.log" Jan 22 12:09:13 crc kubenswrapper[4874]: I0122 12:09:13.133631 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wmwcz_9bec261f-fd1e-44f7-a402-cae34f722b6c/machine-api-operator/0.log" Jan 22 12:09:18 crc 
kubenswrapper[4874]: I0122 12:09:18.447073 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-7nhkf_50438a7c-3a86-4aeb-a9c6-10ccb68f4593/cert-manager-controller/0.log" Jan 22 12:09:18 crc kubenswrapper[4874]: I0122 12:09:18.465813 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-mnt2m_b35625a1-4b85-4915-8ac0-97ff11979513/cert-manager-cainjector/0.log" Jan 22 12:09:18 crc kubenswrapper[4874]: I0122 12:09:18.477288 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-cvlzs_bef508e9-b648-4c62-bc0d-91bf604067ed/cert-manager-webhook/0.log" Jan 22 12:09:24 crc kubenswrapper[4874]: I0122 12:09:24.339017 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-w7v89_71a74c6d-7ff3-4a3a-8c9d-bfa212b381b2/prometheus-operator/0.log" Jan 22 12:09:24 crc kubenswrapper[4874]: I0122 12:09:24.350922 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr_7a18bf90-2af7-4296-9416-1368e89a8a03/prometheus-operator-admission-webhook/0.log" Jan 22 12:09:24 crc kubenswrapper[4874]: I0122 12:09:24.365153 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8_60c00b52-d6b8-440e-9e60-966b44a87577/prometheus-operator-admission-webhook/0.log" Jan 22 12:09:24 crc kubenswrapper[4874]: I0122 12:09:24.389846 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-jblsg_edd7744e-1336-486a-90de-6568bae7f788/operator/0.log" Jan 22 12:09:24 crc kubenswrapper[4874]: I0122 12:09:24.407221 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-kqrkd_97207973-f6a4-4068-b777-c964c12092fd/perses-operator/0.log" Jan 22 12:09:24 crc kubenswrapper[4874]: I0122 12:09:24.715873 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:09:24 crc kubenswrapper[4874]: E0122 12:09:24.716092 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:09:29 crc kubenswrapper[4874]: I0122 12:09:29.892352 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5_9a48ea4e-8d83-4484-8cab-e9a38e86a2e1/extract/0.log" Jan 22 12:09:29 crc kubenswrapper[4874]: I0122 12:09:29.910213 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5_9a48ea4e-8d83-4484-8cab-e9a38e86a2e1/util/0.log" Jan 22 12:09:29 crc kubenswrapper[4874]: I0122 12:09:29.948128 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931az8rt5_9a48ea4e-8d83-4484-8cab-e9a38e86a2e1/pull/0.log" Jan 22 12:09:29 crc kubenswrapper[4874]: I0122 12:09:29.958374 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf_4db7503f-969b-4507-81bc-fb6ae0579495/extract/0.log" Jan 22 12:09:29 crc kubenswrapper[4874]: I0122 12:09:29.968780 4874 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf_4db7503f-969b-4507-81bc-fb6ae0579495/util/0.log" Jan 22 12:09:29 crc kubenswrapper[4874]: I0122 12:09:29.977774 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fn86tf_4db7503f-969b-4507-81bc-fb6ae0579495/pull/0.log" Jan 22 12:09:29 crc kubenswrapper[4874]: I0122 12:09:29.989746 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7_adad4abb-e416-4993-a166-0ac45ac75521/extract/0.log" Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.000985 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7_adad4abb-e416-4993-a166-0ac45ac75521/util/0.log" Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.009212 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ephtv7_adad4abb-e416-4993-a166-0ac45ac75521/pull/0.log" Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.024844 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk_9634e8c9-9571-45b6-ad7f-c7d68d40c75a/extract/0.log" Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.038411 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk_9634e8c9-9571-45b6-ad7f-c7d68d40c75a/util/0.log" Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.045387 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087fnfk_9634e8c9-9571-45b6-ad7f-c7d68d40c75a/pull/0.log" Jan 
22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.194678 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-98926_4c478499-7674-4b38-b0ea-15b5d9bc4702/registry-server/0.log"
Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.199580 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-98926_4c478499-7674-4b38-b0ea-15b5d9bc4702/extract-utilities/0.log"
Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.206686 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-98926_4c478499-7674-4b38-b0ea-15b5d9bc4702/extract-content/0.log"
Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.550937 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g952h_1b32604b-2210-43c4-9b45-a833b8bdff64/registry-server/0.log"
Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.555434 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g952h_1b32604b-2210-43c4-9b45-a833b8bdff64/extract-utilities/0.log"
Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.561033 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g952h_1b32604b-2210-43c4-9b45-a833b8bdff64/extract-content/0.log"
Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.573215 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-st8lg_bb25a3d3-60b1-43ae-b007-19b20c362414/marketplace-operator/0.log"
Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.842424 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7qhz5_0684b3da-a3b2-496b-be2e-b8e0fe8e1277/registry-server/0.log"
Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.847677 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7qhz5_0684b3da-a3b2-496b-be2e-b8e0fe8e1277/extract-utilities/0.log"
Jan 22 12:09:30 crc kubenswrapper[4874]: I0122 12:09:30.854875 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7qhz5_0684b3da-a3b2-496b-be2e-b8e0fe8e1277/extract-content/0.log"
Jan 22 12:09:35 crc kubenswrapper[4874]: I0122 12:09:35.413106 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-w7v89_71a74c6d-7ff3-4a3a-8c9d-bfa212b381b2/prometheus-operator/0.log"
Jan 22 12:09:35 crc kubenswrapper[4874]: I0122 12:09:35.424864 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr_7a18bf90-2af7-4296-9416-1368e89a8a03/prometheus-operator-admission-webhook/0.log"
Jan 22 12:09:35 crc kubenswrapper[4874]: I0122 12:09:35.436773 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8_60c00b52-d6b8-440e-9e60-966b44a87577/prometheus-operator-admission-webhook/0.log"
Jan 22 12:09:35 crc kubenswrapper[4874]: I0122 12:09:35.453243 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-jblsg_edd7744e-1336-486a-90de-6568bae7f788/operator/0.log"
Jan 22 12:09:35 crc kubenswrapper[4874]: I0122 12:09:35.469122 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-kqrkd_97207973-f6a4-4068-b777-c964c12092fd/perses-operator/0.log"
Jan 22 12:09:39 crc kubenswrapper[4874]: I0122 12:09:39.716723 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"
Jan 22 12:09:39 crc kubenswrapper[4874]: E0122 12:09:39.717519 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:09:45 crc kubenswrapper[4874]: I0122 12:09:45.364127 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-w7v89_71a74c6d-7ff3-4a3a-8c9d-bfa212b381b2/prometheus-operator/0.log"
Jan 22 12:09:45 crc kubenswrapper[4874]: I0122 12:09:45.398861 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d95b449b4-2rphr_7a18bf90-2af7-4296-9416-1368e89a8a03/prometheus-operator-admission-webhook/0.log"
Jan 22 12:09:45 crc kubenswrapper[4874]: I0122 12:09:45.409241 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d95b449b4-d8hs8_60c00b52-d6b8-440e-9e60-966b44a87577/prometheus-operator-admission-webhook/0.log"
Jan 22 12:09:45 crc kubenswrapper[4874]: I0122 12:09:45.427884 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-jblsg_edd7744e-1336-486a-90de-6568bae7f788/operator/0.log"
Jan 22 12:09:45 crc kubenswrapper[4874]: I0122 12:09:45.437074 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-kqrkd_97207973-f6a4-4068-b777-c964c12092fd/perses-operator/0.log"
Jan 22 12:09:45 crc kubenswrapper[4874]: I0122 12:09:45.533627 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-7nhkf_50438a7c-3a86-4aeb-a9c6-10ccb68f4593/cert-manager-controller/0.log"
Jan 22 12:09:45 crc kubenswrapper[4874]: I0122 12:09:45.550916 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-mnt2m_b35625a1-4b85-4915-8ac0-97ff11979513/cert-manager-cainjector/0.log"
Jan 22 12:09:45 crc kubenswrapper[4874]: I0122 12:09:45.561406 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-cvlzs_bef508e9-b648-4c62-bc0d-91bf604067ed/cert-manager-webhook/0.log"
Jan 22 12:09:46 crc kubenswrapper[4874]: I0122 12:09:46.024600 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-7nhkf_50438a7c-3a86-4aeb-a9c6-10ccb68f4593/cert-manager-controller/0.log"
Jan 22 12:09:46 crc kubenswrapper[4874]: I0122 12:09:46.047382 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-mnt2m_b35625a1-4b85-4915-8ac0-97ff11979513/cert-manager-cainjector/0.log"
Jan 22 12:09:46 crc kubenswrapper[4874]: I0122 12:09:46.055482 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-cvlzs_bef508e9-b648-4c62-bc0d-91bf604067ed/cert-manager-webhook/0.log"
Jan 22 12:09:46 crc kubenswrapper[4874]: I0122 12:09:46.582787 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6jqjk_a785e748-a557-4c24-8a4d-dc03cfc3c357/control-plane-machine-set-operator/0.log"
Jan 22 12:09:46 crc kubenswrapper[4874]: I0122 12:09:46.597641 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wmwcz_9bec261f-fd1e-44f7-a402-cae34f722b6c/kube-rbac-proxy/0.log"
Jan 22 12:09:46 crc kubenswrapper[4874]: I0122 12:09:46.605528 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wmwcz_9bec261f-fd1e-44f7-a402-cae34f722b6c/machine-api-operator/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.214474 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_30722d93-5804-4aba-a6e3-e2b891356163/alertmanager/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.221880 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_30722d93-5804-4aba-a6e3-e2b891356163/config-reloader/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.233215 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_30722d93-5804-4aba-a6e3-e2b891356163/oauth-proxy/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.240855 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_30722d93-5804-4aba-a6e3-e2b891356163/init-config-reloader/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.252137 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_cd23c25e-9a15-4e31-be24-5d227c6ed353/curl/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.264387 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9_01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3/bridge/2.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.264543 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9_01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3/bridge/1.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.270258 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-6d59d88f55-dg4t9_01e2e1fb-0e3b-4c3b-b5c5-df459233c6e3/sg-core/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.284607 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np_1cca42d2-5390-4dc0-90fd-6abef972f828/oauth-proxy/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.290532 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np_1cca42d2-5390-4dc0-90fd-6abef972f828/bridge/2.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.290810 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np_1cca42d2-5390-4dc0-90fd-6abef972f828/bridge/1.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.299963 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-hv6np_1cca42d2-5390-4dc0-90fd-6abef972f828/sg-core/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.314262 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7_79746e6a-bee0-42eb-840c-36075d04abde/bridge/1.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.314837 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7_79746e6a-bee0-42eb-840c-36075d04abde/bridge/2.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.320868 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-684996bbcd-t6hb7_79746e6a-bee0-42eb-840c-36075d04abde/sg-core/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.331367 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg_16eb2f20-a170-4504-8601-a08d7174edef/oauth-proxy/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.338272 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg_16eb2f20-a170-4504-8601-a08d7174edef/bridge/2.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.338494 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg_16eb2f20-a170-4504-8601-a08d7174edef/bridge/1.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.343893 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-h9sbg_16eb2f20-a170-4504-8601-a08d7174edef/sg-core/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.356039 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf_7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2/oauth-proxy/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.361466 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf_7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2/bridge/1.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.361511 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf_7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2/bridge/2.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.367031 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-cx5rf_7c0b7de5-b0a4-4c4d-93c3-1feba19a95a2/sg-core/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.388207 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-jcvt5_da6dd65e-8b70-41a1-8a38-83392c65aef3/default-interconnect/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.395840 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-9qcrx_a14f7a5d-1819-4ca2-8683-96a338c70df6/prometheus-webhook-snmp/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.428154 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elastic-operator-864f7dd768-szq44_b54bc0d4-cf91-46af-bc4b-ea963cbd59bf/manager/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.459357 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_c9fa7f48-985a-46b3-a7dc-1b39dfc14243/elasticsearch/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.467529 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_c9fa7f48-985a-46b3-a7dc-1b39dfc14243/elastic-internal-init-filesystem/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.478020 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_c9fa7f48-985a-46b3-a7dc-1b39dfc14243/elastic-internal-suspend/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.497284 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_interconnect-operator-5bb49f789d-p9brk_cd733750-1915-4813-9e44-ec3777ce9c53/interconnect-operator/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.513228 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7cd2b3c5-32a6-49c1-984c-86aa8fe36f85/prometheus/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.521781 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7cd2b3c5-32a6-49c1-984c-86aa8fe36f85/config-reloader/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.533584 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7cd2b3c5-32a6-49c1-984c-86aa8fe36f85/oauth-proxy/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.539627 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7cd2b3c5-32a6-49c1-984c-86aa8fe36f85/init-config-reloader/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.574849 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_0930f68b-3161-43da-80c9-440cf31d98b9/docker-build/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.579467 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_0930f68b-3161-43da-80c9-440cf31d98b9/git-clone/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.586764 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_0930f68b-3161-43da-80c9-440cf31d98b9/manage-dockerfile/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.604788 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_056ff179-9e03-4f48-b3d3-fa995d0c4093/qdr/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.645072 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_b6347418-07a3-41af-aea9-1eddb77e64fb/docker-build/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.703054 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_b6347418-07a3-41af-aea9-1eddb77e64fb/git-clone/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.711478 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_b6347418-07a3-41af-aea9-1eddb77e64fb/manage-dockerfile/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.879980 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-66768bfd46-6v9td_9c1cf40e-37fa-4e5a-8bb3-8ee8aee9c413/operator/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.921744 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_c087bdb2-7a13-44f4-a3e8-d023752790b1/docker-build/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.926839 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_c087bdb2-7a13-44f4-a3e8-d023752790b1/git-clone/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.939220 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_c087bdb2-7a13-44f4-a3e8-d023752790b1/manage-dockerfile/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.975360 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_d9c696c3-8c17-45a3-93d2-75801fa0bff4/docker-build/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.981046 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_d9c696c3-8c17-45a3-93d2-75801fa0bff4/git-clone/0.log"
Jan 22 12:09:47 crc kubenswrapper[4874]: I0122 12:09:47.987818 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_d9c696c3-8c17-45a3-93d2-75801fa0bff4/manage-dockerfile/0.log"
Jan 22 12:09:48 crc kubenswrapper[4874]: I0122 12:09:48.054551 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_e36b7db7-fd44-4552-8710-adb858c931c9/docker-build/0.log"
Jan 22 12:09:48 crc kubenswrapper[4874]: I0122 12:09:48.059570 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_e36b7db7-fd44-4552-8710-adb858c931c9/git-clone/0.log"
Jan 22 12:09:48 crc kubenswrapper[4874]: I0122 12:09:48.067094 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_e36b7db7-fd44-4552-8710-adb858c931c9/manage-dockerfile/0.log"
Jan 22 12:09:50 crc kubenswrapper[4874]: I0122 12:09:50.716299 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"
Jan 22 12:09:50 crc kubenswrapper[4874]: E0122 12:09:50.716819 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:09:51 crc kubenswrapper[4874]: I0122 12:09:51.149650 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-779b7b47dc-4zkhw_f01ecec4-43d6-424e-843b-ef89a732199e/operator/0.log"
Jan 22 12:09:51 crc kubenswrapper[4874]: I0122 12:09:51.166022 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-fpm2k_91078bec-5716-4d9f-ade0-53de332b1871/smoketest-collectd/0.log"
Jan 22 12:09:51 crc kubenswrapper[4874]: I0122 12:09:51.171179 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-fpm2k_91078bec-5716-4d9f-ade0-53de332b1871/smoketest-ceilometer/0.log"
Jan 22 12:09:51 crc kubenswrapper[4874]: I0122 12:09:51.186859 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-qb5kr_c1f3f1ed-6e23-4ad5-877c-522b9cb42ade/smoketest-collectd/0.log"
Jan 22 12:09:51 crc kubenswrapper[4874]: I0122 12:09:51.191307 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-qb5kr_c1f3f1ed-6e23-4ad5-877c-522b9cb42ade/smoketest-ceilometer/0.log"
Jan 22 12:09:52 crc kubenswrapper[4874]: I0122 12:09:52.569114 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pdb2m_0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4/kube-multus-additional-cni-plugins/0.log"
Jan 22 12:09:52 crc kubenswrapper[4874]: I0122 12:09:52.577662 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pdb2m_0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4/egress-router-binary-copy/0.log"
Jan 22 12:09:52 crc kubenswrapper[4874]: I0122 12:09:52.586807 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pdb2m_0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4/cni-plugins/0.log"
Jan 22 12:09:52 crc kubenswrapper[4874]: I0122 12:09:52.594969 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pdb2m_0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4/bond-cni-plugin/0.log"
Jan 22 12:09:52 crc kubenswrapper[4874]: I0122 12:09:52.603587 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pdb2m_0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4/routeoverride-cni/0.log"
Jan 22 12:09:52 crc kubenswrapper[4874]: I0122 12:09:52.611787 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pdb2m_0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4/whereabouts-cni-bincopy/0.log"
Jan 22 12:09:52 crc kubenswrapper[4874]: I0122 12:09:52.623696 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pdb2m_0dbaa8a2-585e-4ab7-a8f5-7b224ad8c5c4/whereabouts-cni/0.log"
Jan 22 12:09:52 crc kubenswrapper[4874]: I0122 12:09:52.636675 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-f64ps_1795c220-db74-434e-9111-917ff6d95077/multus-admission-controller/0.log"
Jan 22 12:09:52 crc kubenswrapper[4874]: I0122 12:09:52.644567 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-f64ps_1795c220-db74-434e-9111-917ff6d95077/kube-rbac-proxy/0.log"
Jan 22 12:09:52 crc kubenswrapper[4874]: I0122 12:09:52.670862 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krrtc_977746b5-ac1b-4b6e-bdbc-ddd90225e68c/kube-multus/3.log"
Jan 22 12:09:52 crc kubenswrapper[4874]: I0122 12:09:52.685960 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krrtc_977746b5-ac1b-4b6e-bdbc-ddd90225e68c/kube-multus/2.log"
Jan 22 12:09:52 crc kubenswrapper[4874]: I0122 12:09:52.705577 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-lr2vd_5451fbab-ebad-42e7-bb80-f94bad10d571/network-metrics-daemon/0.log"
Jan 22 12:09:52 crc kubenswrapper[4874]: I0122 12:09:52.710769 4874 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-lr2vd_5451fbab-ebad-42e7-bb80-f94bad10d571/kube-rbac-proxy/0.log"
Jan 22 12:10:05 crc kubenswrapper[4874]: I0122 12:10:05.715767 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"
Jan 22 12:10:05 crc kubenswrapper[4874]: E0122 12:10:05.716623 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:10:19 crc kubenswrapper[4874]: I0122 12:10:19.717310 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"
Jan 22 12:10:19 crc kubenswrapper[4874]: E0122 12:10:19.718383 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:10:30 crc kubenswrapper[4874]: I0122 12:10:30.716727 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"
Jan 22 12:10:30 crc kubenswrapper[4874]: E0122 12:10:30.717800 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:10:42 crc kubenswrapper[4874]: I0122 12:10:42.716929 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"
Jan 22 12:10:42 crc kubenswrapper[4874]: E0122 12:10:42.717804 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:10:54 crc kubenswrapper[4874]: I0122 12:10:54.717155 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"
Jan 22 12:10:54 crc kubenswrapper[4874]: E0122 12:10:54.717872 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:11:08 crc kubenswrapper[4874]: I0122 12:11:08.719031 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"
Jan 22 12:11:08 crc kubenswrapper[4874]: E0122 12:11:08.719794 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:11:21 crc kubenswrapper[4874]: I0122 12:11:21.715999 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"
Jan 22 12:11:21 crc kubenswrapper[4874]: E0122 12:11:21.716774 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:11:33 crc kubenswrapper[4874]: I0122 12:11:33.716913 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"
Jan 22 12:11:33 crc kubenswrapper[4874]: E0122 12:11:33.718045 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:11:46 crc kubenswrapper[4874]: I0122 12:11:46.728341 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15"
Jan 22 12:11:46 crc kubenswrapper[4874]: I0122 12:11:46.992090 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"4bd591f21c90200802db0c3a40e96e2477cc7ef98ac950449bf351a75f78ca92"}
Jan 22 12:12:35 crc kubenswrapper[4874]: I0122 12:12:35.751701 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h5v4d"]
Jan 22 12:12:35 crc kubenswrapper[4874]: I0122 12:12:35.754798 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h5v4d"
Jan 22 12:12:35 crc kubenswrapper[4874]: I0122 12:12:35.761874 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h5v4d"]
Jan 22 12:12:35 crc kubenswrapper[4874]: I0122 12:12:35.810180 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bf24c61-16db-4593-ba56-3e13cf4b2320-utilities\") pod \"certified-operators-h5v4d\" (UID: \"9bf24c61-16db-4593-ba56-3e13cf4b2320\") " pod="openshift-marketplace/certified-operators-h5v4d"
Jan 22 12:12:35 crc kubenswrapper[4874]: I0122 12:12:35.810311 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bf24c61-16db-4593-ba56-3e13cf4b2320-catalog-content\") pod \"certified-operators-h5v4d\" (UID: \"9bf24c61-16db-4593-ba56-3e13cf4b2320\") " pod="openshift-marketplace/certified-operators-h5v4d"
Jan 22 12:12:35 crc kubenswrapper[4874]: I0122 12:12:35.810441 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mbxb\" (UniqueName: \"kubernetes.io/projected/9bf24c61-16db-4593-ba56-3e13cf4b2320-kube-api-access-2mbxb\") pod \"certified-operators-h5v4d\" (UID: \"9bf24c61-16db-4593-ba56-3e13cf4b2320\") " pod="openshift-marketplace/certified-operators-h5v4d"
Jan 22 12:12:35 crc kubenswrapper[4874]: I0122 12:12:35.912137 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bf24c61-16db-4593-ba56-3e13cf4b2320-utilities\") pod \"certified-operators-h5v4d\" (UID: \"9bf24c61-16db-4593-ba56-3e13cf4b2320\") " pod="openshift-marketplace/certified-operators-h5v4d"
Jan 22 12:12:35 crc kubenswrapper[4874]: I0122 12:12:35.912474 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bf24c61-16db-4593-ba56-3e13cf4b2320-catalog-content\") pod \"certified-operators-h5v4d\" (UID: \"9bf24c61-16db-4593-ba56-3e13cf4b2320\") " pod="openshift-marketplace/certified-operators-h5v4d"
Jan 22 12:12:35 crc kubenswrapper[4874]: I0122 12:12:35.912647 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mbxb\" (UniqueName: \"kubernetes.io/projected/9bf24c61-16db-4593-ba56-3e13cf4b2320-kube-api-access-2mbxb\") pod \"certified-operators-h5v4d\" (UID: \"9bf24c61-16db-4593-ba56-3e13cf4b2320\") " pod="openshift-marketplace/certified-operators-h5v4d"
Jan 22 12:12:35 crc kubenswrapper[4874]: I0122 12:12:35.913692 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bf24c61-16db-4593-ba56-3e13cf4b2320-utilities\") pod \"certified-operators-h5v4d\" (UID: \"9bf24c61-16db-4593-ba56-3e13cf4b2320\") " pod="openshift-marketplace/certified-operators-h5v4d"
Jan 22 12:12:35 crc kubenswrapper[4874]: I0122 12:12:35.914132 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bf24c61-16db-4593-ba56-3e13cf4b2320-catalog-content\") pod \"certified-operators-h5v4d\" (UID: \"9bf24c61-16db-4593-ba56-3e13cf4b2320\") " pod="openshift-marketplace/certified-operators-h5v4d"
Jan 22 12:12:35 crc kubenswrapper[4874]: I0122 12:12:35.952004 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mbxb\" (UniqueName: \"kubernetes.io/projected/9bf24c61-16db-4593-ba56-3e13cf4b2320-kube-api-access-2mbxb\") pod \"certified-operators-h5v4d\" (UID: \"9bf24c61-16db-4593-ba56-3e13cf4b2320\") " pod="openshift-marketplace/certified-operators-h5v4d"
Jan 22 12:12:36 crc kubenswrapper[4874]: I0122 12:12:36.087822 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h5v4d"
Jan 22 12:12:36 crc kubenswrapper[4874]: I0122 12:12:36.344075 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h5v4d"]
Jan 22 12:12:36 crc kubenswrapper[4874]: I0122 12:12:36.479503 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5v4d" event={"ID":"9bf24c61-16db-4593-ba56-3e13cf4b2320","Type":"ContainerStarted","Data":"a35d385e2fc22970f70e33cd6e3ffc93c10baa2c9a892e4f5c5d067046cfdb4f"}
Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.490442 4874 generic.go:334] "Generic (PLEG): container finished" podID="9bf24c61-16db-4593-ba56-3e13cf4b2320" containerID="9d0709b48d1811b8c84b55687b33409b330c566b7d4ee4ab5f7927d6bece36d9" exitCode=0
Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.490498 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5v4d" event={"ID":"9bf24c61-16db-4593-ba56-3e13cf4b2320","Type":"ContainerDied","Data":"9d0709b48d1811b8c84b55687b33409b330c566b7d4ee4ab5f7927d6bece36d9"}
Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.492146 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.575088 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9ftt2"]
Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.577540 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ftt2"
Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.588702 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ftt2"]
Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.644253 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-catalog-content\") pod \"redhat-operators-9ftt2\" (UID: \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\") " pod="openshift-marketplace/redhat-operators-9ftt2"
Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.644321 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f54p\" (UniqueName: \"kubernetes.io/projected/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-kube-api-access-5f54p\") pod \"redhat-operators-9ftt2\" (UID: \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\") " pod="openshift-marketplace/redhat-operators-9ftt2"
Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.644361 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-utilities\") pod \"redhat-operators-9ftt2\" (UID: \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\") " pod="openshift-marketplace/redhat-operators-9ftt2"
Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.745419 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-catalog-content\") pod \"redhat-operators-9ftt2\" (UID: \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\") " pod="openshift-marketplace/redhat-operators-9ftt2"
Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.745499 4874 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-5f54p\" (UniqueName: \"kubernetes.io/projected/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-kube-api-access-5f54p\") pod \"redhat-operators-9ftt2\" (UID: \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\") " pod="openshift-marketplace/redhat-operators-9ftt2" Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.745540 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-utilities\") pod \"redhat-operators-9ftt2\" (UID: \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\") " pod="openshift-marketplace/redhat-operators-9ftt2" Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.746195 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-catalog-content\") pod \"redhat-operators-9ftt2\" (UID: \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\") " pod="openshift-marketplace/redhat-operators-9ftt2" Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.746323 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-utilities\") pod \"redhat-operators-9ftt2\" (UID: \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\") " pod="openshift-marketplace/redhat-operators-9ftt2" Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.771973 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f54p\" (UniqueName: \"kubernetes.io/projected/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-kube-api-access-5f54p\") pod \"redhat-operators-9ftt2\" (UID: \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\") " pod="openshift-marketplace/redhat-operators-9ftt2" Jan 22 12:12:37 crc kubenswrapper[4874]: I0122 12:12:37.901941 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9ftt2" Jan 22 12:12:38 crc kubenswrapper[4874]: I0122 12:12:38.339106 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ftt2"] Jan 22 12:12:38 crc kubenswrapper[4874]: W0122 12:12:38.348798 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64ecc5d7_a8b0_4e2b_9520_f74ef155f54a.slice/crio-305cd87b6ec882e88fb40a1ac4fd939e5ee3a05f35521146bfa61373338404dc WatchSource:0}: Error finding container 305cd87b6ec882e88fb40a1ac4fd939e5ee3a05f35521146bfa61373338404dc: Status 404 returned error can't find the container with id 305cd87b6ec882e88fb40a1ac4fd939e5ee3a05f35521146bfa61373338404dc Jan 22 12:12:38 crc kubenswrapper[4874]: I0122 12:12:38.499238 4874 generic.go:334] "Generic (PLEG): container finished" podID="9bf24c61-16db-4593-ba56-3e13cf4b2320" containerID="f6de90acdaad3f5455174ad4b6c2480f83ce787bdf0eeb279fa155ce857ef467" exitCode=0 Jan 22 12:12:38 crc kubenswrapper[4874]: I0122 12:12:38.499312 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5v4d" event={"ID":"9bf24c61-16db-4593-ba56-3e13cf4b2320","Type":"ContainerDied","Data":"f6de90acdaad3f5455174ad4b6c2480f83ce787bdf0eeb279fa155ce857ef467"} Jan 22 12:12:38 crc kubenswrapper[4874]: I0122 12:12:38.501105 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ftt2" event={"ID":"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a","Type":"ContainerStarted","Data":"ad5a3be4a4bb4d5d1a756d0a99a798f0e7b745ab428423fc92215233ed167496"} Jan 22 12:12:38 crc kubenswrapper[4874]: I0122 12:12:38.501142 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ftt2" 
event={"ID":"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a","Type":"ContainerStarted","Data":"305cd87b6ec882e88fb40a1ac4fd939e5ee3a05f35521146bfa61373338404dc"} Jan 22 12:12:39 crc kubenswrapper[4874]: I0122 12:12:39.521127 4874 generic.go:334] "Generic (PLEG): container finished" podID="64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" containerID="ad5a3be4a4bb4d5d1a756d0a99a798f0e7b745ab428423fc92215233ed167496" exitCode=0 Jan 22 12:12:39 crc kubenswrapper[4874]: I0122 12:12:39.521160 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ftt2" event={"ID":"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a","Type":"ContainerDied","Data":"ad5a3be4a4bb4d5d1a756d0a99a798f0e7b745ab428423fc92215233ed167496"} Jan 22 12:12:39 crc kubenswrapper[4874]: I0122 12:12:39.521549 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ftt2" event={"ID":"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a","Type":"ContainerStarted","Data":"c741cf9cde1c4861c16f849da921132c17a3beb19a123d25322d71547cf13285"} Jan 22 12:12:39 crc kubenswrapper[4874]: I0122 12:12:39.533568 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5v4d" event={"ID":"9bf24c61-16db-4593-ba56-3e13cf4b2320","Type":"ContainerStarted","Data":"ac751bd5d37eaf7b3f40e2da3e07f8429036c76a792a36f2be475b0a48fa4442"} Jan 22 12:12:39 crc kubenswrapper[4874]: I0122 12:12:39.565249 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h5v4d" podStartSLOduration=3.192098301 podStartE2EDuration="4.565225182s" podCreationTimestamp="2026-01-22 12:12:35 +0000 UTC" firstStartedPulling="2026-01-22 12:12:37.491900233 +0000 UTC m=+1931.336971313" lastFinishedPulling="2026-01-22 12:12:38.865027114 +0000 UTC m=+1932.710098194" observedRunningTime="2026-01-22 12:12:39.561257119 +0000 UTC m=+1933.406328209" watchObservedRunningTime="2026-01-22 12:12:39.565225182 +0000 UTC 
m=+1933.410296292" Jan 22 12:12:43 crc kubenswrapper[4874]: I0122 12:12:43.574695 4874 generic.go:334] "Generic (PLEG): container finished" podID="64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" containerID="c741cf9cde1c4861c16f849da921132c17a3beb19a123d25322d71547cf13285" exitCode=0 Jan 22 12:12:43 crc kubenswrapper[4874]: I0122 12:12:43.574738 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ftt2" event={"ID":"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a","Type":"ContainerDied","Data":"c741cf9cde1c4861c16f849da921132c17a3beb19a123d25322d71547cf13285"} Jan 22 12:12:44 crc kubenswrapper[4874]: I0122 12:12:44.586571 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ftt2" event={"ID":"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a","Type":"ContainerStarted","Data":"c86e8e87b6b18703131d3c91af4e04cccee578cc740be51fa2f7e27d62b9954a"} Jan 22 12:12:44 crc kubenswrapper[4874]: I0122 12:12:44.618523 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9ftt2" podStartSLOduration=1.781040654 podStartE2EDuration="7.618505431s" podCreationTimestamp="2026-01-22 12:12:37 +0000 UTC" firstStartedPulling="2026-01-22 12:12:38.504635547 +0000 UTC m=+1932.349706617" lastFinishedPulling="2026-01-22 12:12:44.342100324 +0000 UTC m=+1938.187171394" observedRunningTime="2026-01-22 12:12:44.614556938 +0000 UTC m=+1938.459628058" watchObservedRunningTime="2026-01-22 12:12:44.618505431 +0000 UTC m=+1938.463576511" Jan 22 12:12:46 crc kubenswrapper[4874]: I0122 12:12:46.088265 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h5v4d" Jan 22 12:12:46 crc kubenswrapper[4874]: I0122 12:12:46.088361 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h5v4d" Jan 22 12:12:46 crc kubenswrapper[4874]: I0122 12:12:46.164356 4874 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h5v4d" Jan 22 12:12:46 crc kubenswrapper[4874]: I0122 12:12:46.673258 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h5v4d" Jan 22 12:12:46 crc kubenswrapper[4874]: I0122 12:12:46.927908 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h5v4d"] Jan 22 12:12:47 crc kubenswrapper[4874]: I0122 12:12:47.902849 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9ftt2" Jan 22 12:12:47 crc kubenswrapper[4874]: I0122 12:12:47.902923 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9ftt2" Jan 22 12:12:48 crc kubenswrapper[4874]: I0122 12:12:48.617956 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h5v4d" podUID="9bf24c61-16db-4593-ba56-3e13cf4b2320" containerName="registry-server" containerID="cri-o://ac751bd5d37eaf7b3f40e2da3e07f8429036c76a792a36f2be475b0a48fa4442" gracePeriod=2 Jan 22 12:12:48 crc kubenswrapper[4874]: I0122 12:12:48.961318 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9ftt2" podUID="64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" containerName="registry-server" probeResult="failure" output=< Jan 22 12:12:48 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Jan 22 12:12:48 crc kubenswrapper[4874]: > Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.050646 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h5v4d" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.129507 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mbxb\" (UniqueName: \"kubernetes.io/projected/9bf24c61-16db-4593-ba56-3e13cf4b2320-kube-api-access-2mbxb\") pod \"9bf24c61-16db-4593-ba56-3e13cf4b2320\" (UID: \"9bf24c61-16db-4593-ba56-3e13cf4b2320\") " Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.129570 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bf24c61-16db-4593-ba56-3e13cf4b2320-utilities\") pod \"9bf24c61-16db-4593-ba56-3e13cf4b2320\" (UID: \"9bf24c61-16db-4593-ba56-3e13cf4b2320\") " Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.129670 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bf24c61-16db-4593-ba56-3e13cf4b2320-catalog-content\") pod \"9bf24c61-16db-4593-ba56-3e13cf4b2320\" (UID: \"9bf24c61-16db-4593-ba56-3e13cf4b2320\") " Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.130913 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bf24c61-16db-4593-ba56-3e13cf4b2320-utilities" (OuterVolumeSpecName: "utilities") pod "9bf24c61-16db-4593-ba56-3e13cf4b2320" (UID: "9bf24c61-16db-4593-ba56-3e13cf4b2320"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.150889 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bf24c61-16db-4593-ba56-3e13cf4b2320-kube-api-access-2mbxb" (OuterVolumeSpecName: "kube-api-access-2mbxb") pod "9bf24c61-16db-4593-ba56-3e13cf4b2320" (UID: "9bf24c61-16db-4593-ba56-3e13cf4b2320"). InnerVolumeSpecName "kube-api-access-2mbxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.180156 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bf24c61-16db-4593-ba56-3e13cf4b2320-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bf24c61-16db-4593-ba56-3e13cf4b2320" (UID: "9bf24c61-16db-4593-ba56-3e13cf4b2320"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.231462 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mbxb\" (UniqueName: \"kubernetes.io/projected/9bf24c61-16db-4593-ba56-3e13cf4b2320-kube-api-access-2mbxb\") on node \"crc\" DevicePath \"\"" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.231505 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bf24c61-16db-4593-ba56-3e13cf4b2320-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.231517 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bf24c61-16db-4593-ba56-3e13cf4b2320-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.626111 4874 generic.go:334] "Generic (PLEG): container finished" podID="9bf24c61-16db-4593-ba56-3e13cf4b2320" containerID="ac751bd5d37eaf7b3f40e2da3e07f8429036c76a792a36f2be475b0a48fa4442" exitCode=0 Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.626156 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h5v4d" event={"ID":"9bf24c61-16db-4593-ba56-3e13cf4b2320","Type":"ContainerDied","Data":"ac751bd5d37eaf7b3f40e2da3e07f8429036c76a792a36f2be475b0a48fa4442"} Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.626186 4874 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-h5v4d" event={"ID":"9bf24c61-16db-4593-ba56-3e13cf4b2320","Type":"ContainerDied","Data":"a35d385e2fc22970f70e33cd6e3ffc93c10baa2c9a892e4f5c5d067046cfdb4f"} Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.626206 4874 scope.go:117] "RemoveContainer" containerID="ac751bd5d37eaf7b3f40e2da3e07f8429036c76a792a36f2be475b0a48fa4442" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.626362 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h5v4d" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.654432 4874 scope.go:117] "RemoveContainer" containerID="f6de90acdaad3f5455174ad4b6c2480f83ce787bdf0eeb279fa155ce857ef467" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.665575 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h5v4d"] Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.670646 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h5v4d"] Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.683497 4874 scope.go:117] "RemoveContainer" containerID="9d0709b48d1811b8c84b55687b33409b330c566b7d4ee4ab5f7927d6bece36d9" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.709064 4874 scope.go:117] "RemoveContainer" containerID="ac751bd5d37eaf7b3f40e2da3e07f8429036c76a792a36f2be475b0a48fa4442" Jan 22 12:12:49 crc kubenswrapper[4874]: E0122 12:12:49.709545 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac751bd5d37eaf7b3f40e2da3e07f8429036c76a792a36f2be475b0a48fa4442\": container with ID starting with ac751bd5d37eaf7b3f40e2da3e07f8429036c76a792a36f2be475b0a48fa4442 not found: ID does not exist" containerID="ac751bd5d37eaf7b3f40e2da3e07f8429036c76a792a36f2be475b0a48fa4442" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 
12:12:49.709610 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac751bd5d37eaf7b3f40e2da3e07f8429036c76a792a36f2be475b0a48fa4442"} err="failed to get container status \"ac751bd5d37eaf7b3f40e2da3e07f8429036c76a792a36f2be475b0a48fa4442\": rpc error: code = NotFound desc = could not find container \"ac751bd5d37eaf7b3f40e2da3e07f8429036c76a792a36f2be475b0a48fa4442\": container with ID starting with ac751bd5d37eaf7b3f40e2da3e07f8429036c76a792a36f2be475b0a48fa4442 not found: ID does not exist" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.711142 4874 scope.go:117] "RemoveContainer" containerID="f6de90acdaad3f5455174ad4b6c2480f83ce787bdf0eeb279fa155ce857ef467" Jan 22 12:12:49 crc kubenswrapper[4874]: E0122 12:12:49.713553 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6de90acdaad3f5455174ad4b6c2480f83ce787bdf0eeb279fa155ce857ef467\": container with ID starting with f6de90acdaad3f5455174ad4b6c2480f83ce787bdf0eeb279fa155ce857ef467 not found: ID does not exist" containerID="f6de90acdaad3f5455174ad4b6c2480f83ce787bdf0eeb279fa155ce857ef467" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.713631 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6de90acdaad3f5455174ad4b6c2480f83ce787bdf0eeb279fa155ce857ef467"} err="failed to get container status \"f6de90acdaad3f5455174ad4b6c2480f83ce787bdf0eeb279fa155ce857ef467\": rpc error: code = NotFound desc = could not find container \"f6de90acdaad3f5455174ad4b6c2480f83ce787bdf0eeb279fa155ce857ef467\": container with ID starting with f6de90acdaad3f5455174ad4b6c2480f83ce787bdf0eeb279fa155ce857ef467 not found: ID does not exist" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.713660 4874 scope.go:117] "RemoveContainer" containerID="9d0709b48d1811b8c84b55687b33409b330c566b7d4ee4ab5f7927d6bece36d9" Jan 22 12:12:49 crc 
kubenswrapper[4874]: E0122 12:12:49.714075 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d0709b48d1811b8c84b55687b33409b330c566b7d4ee4ab5f7927d6bece36d9\": container with ID starting with 9d0709b48d1811b8c84b55687b33409b330c566b7d4ee4ab5f7927d6bece36d9 not found: ID does not exist" containerID="9d0709b48d1811b8c84b55687b33409b330c566b7d4ee4ab5f7927d6bece36d9" Jan 22 12:12:49 crc kubenswrapper[4874]: I0122 12:12:49.714160 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0709b48d1811b8c84b55687b33409b330c566b7d4ee4ab5f7927d6bece36d9"} err="failed to get container status \"9d0709b48d1811b8c84b55687b33409b330c566b7d4ee4ab5f7927d6bece36d9\": rpc error: code = NotFound desc = could not find container \"9d0709b48d1811b8c84b55687b33409b330c566b7d4ee4ab5f7927d6bece36d9\": container with ID starting with 9d0709b48d1811b8c84b55687b33409b330c566b7d4ee4ab5f7927d6bece36d9 not found: ID does not exist" Jan 22 12:12:50 crc kubenswrapper[4874]: I0122 12:12:50.728952 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bf24c61-16db-4593-ba56-3e13cf4b2320" path="/var/lib/kubelet/pods/9bf24c61-16db-4593-ba56-3e13cf4b2320/volumes" Jan 22 12:12:57 crc kubenswrapper[4874]: I0122 12:12:57.983184 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9ftt2" Jan 22 12:12:58 crc kubenswrapper[4874]: I0122 12:12:58.053051 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9ftt2" Jan 22 12:12:58 crc kubenswrapper[4874]: I0122 12:12:58.443887 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ftt2"] Jan 22 12:12:59 crc kubenswrapper[4874]: I0122 12:12:59.724735 4874 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-9ftt2" podUID="64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" containerName="registry-server" containerID="cri-o://c86e8e87b6b18703131d3c91af4e04cccee578cc740be51fa2f7e27d62b9954a" gracePeriod=2 Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.164355 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ftt2" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.230186 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-utilities\") pod \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\" (UID: \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\") " Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.230288 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f54p\" (UniqueName: \"kubernetes.io/projected/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-kube-api-access-5f54p\") pod \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\" (UID: \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\") " Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.230317 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-catalog-content\") pod \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\" (UID: \"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a\") " Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.231273 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-utilities" (OuterVolumeSpecName: "utilities") pod "64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" (UID: "64ecc5d7-a8b0-4e2b-9520-f74ef155f54a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.250415 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-kube-api-access-5f54p" (OuterVolumeSpecName: "kube-api-access-5f54p") pod "64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" (UID: "64ecc5d7-a8b0-4e2b-9520-f74ef155f54a"). InnerVolumeSpecName "kube-api-access-5f54p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.331509 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.331549 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f54p\" (UniqueName: \"kubernetes.io/projected/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-kube-api-access-5f54p\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.348734 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" (UID: "64ecc5d7-a8b0-4e2b-9520-f74ef155f54a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.432463 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.758429 4874 generic.go:334] "Generic (PLEG): container finished" podID="64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" containerID="c86e8e87b6b18703131d3c91af4e04cccee578cc740be51fa2f7e27d62b9954a" exitCode=0 Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.758756 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ftt2" event={"ID":"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a","Type":"ContainerDied","Data":"c86e8e87b6b18703131d3c91af4e04cccee578cc740be51fa2f7e27d62b9954a"} Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.758799 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ftt2" event={"ID":"64ecc5d7-a8b0-4e2b-9520-f74ef155f54a","Type":"ContainerDied","Data":"305cd87b6ec882e88fb40a1ac4fd939e5ee3a05f35521146bfa61373338404dc"} Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.758827 4874 scope.go:117] "RemoveContainer" containerID="c86e8e87b6b18703131d3c91af4e04cccee578cc740be51fa2f7e27d62b9954a" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.759066 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9ftt2" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.801859 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ftt2"] Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.803444 4874 scope.go:117] "RemoveContainer" containerID="c741cf9cde1c4861c16f849da921132c17a3beb19a123d25322d71547cf13285" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.811173 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9ftt2"] Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.833131 4874 scope.go:117] "RemoveContainer" containerID="ad5a3be4a4bb4d5d1a756d0a99a798f0e7b745ab428423fc92215233ed167496" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.860312 4874 scope.go:117] "RemoveContainer" containerID="c86e8e87b6b18703131d3c91af4e04cccee578cc740be51fa2f7e27d62b9954a" Jan 22 12:13:00 crc kubenswrapper[4874]: E0122 12:13:00.861170 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c86e8e87b6b18703131d3c91af4e04cccee578cc740be51fa2f7e27d62b9954a\": container with ID starting with c86e8e87b6b18703131d3c91af4e04cccee578cc740be51fa2f7e27d62b9954a not found: ID does not exist" containerID="c86e8e87b6b18703131d3c91af4e04cccee578cc740be51fa2f7e27d62b9954a" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.861213 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c86e8e87b6b18703131d3c91af4e04cccee578cc740be51fa2f7e27d62b9954a"} err="failed to get container status \"c86e8e87b6b18703131d3c91af4e04cccee578cc740be51fa2f7e27d62b9954a\": rpc error: code = NotFound desc = could not find container \"c86e8e87b6b18703131d3c91af4e04cccee578cc740be51fa2f7e27d62b9954a\": container with ID starting with c86e8e87b6b18703131d3c91af4e04cccee578cc740be51fa2f7e27d62b9954a not found: ID does 
not exist" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.861239 4874 scope.go:117] "RemoveContainer" containerID="c741cf9cde1c4861c16f849da921132c17a3beb19a123d25322d71547cf13285" Jan 22 12:13:00 crc kubenswrapper[4874]: E0122 12:13:00.861877 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c741cf9cde1c4861c16f849da921132c17a3beb19a123d25322d71547cf13285\": container with ID starting with c741cf9cde1c4861c16f849da921132c17a3beb19a123d25322d71547cf13285 not found: ID does not exist" containerID="c741cf9cde1c4861c16f849da921132c17a3beb19a123d25322d71547cf13285" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.861979 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c741cf9cde1c4861c16f849da921132c17a3beb19a123d25322d71547cf13285"} err="failed to get container status \"c741cf9cde1c4861c16f849da921132c17a3beb19a123d25322d71547cf13285\": rpc error: code = NotFound desc = could not find container \"c741cf9cde1c4861c16f849da921132c17a3beb19a123d25322d71547cf13285\": container with ID starting with c741cf9cde1c4861c16f849da921132c17a3beb19a123d25322d71547cf13285 not found: ID does not exist" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.862129 4874 scope.go:117] "RemoveContainer" containerID="ad5a3be4a4bb4d5d1a756d0a99a798f0e7b745ab428423fc92215233ed167496" Jan 22 12:13:00 crc kubenswrapper[4874]: E0122 12:13:00.862441 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad5a3be4a4bb4d5d1a756d0a99a798f0e7b745ab428423fc92215233ed167496\": container with ID starting with ad5a3be4a4bb4d5d1a756d0a99a798f0e7b745ab428423fc92215233ed167496 not found: ID does not exist" containerID="ad5a3be4a4bb4d5d1a756d0a99a798f0e7b745ab428423fc92215233ed167496" Jan 22 12:13:00 crc kubenswrapper[4874]: I0122 12:13:00.862468 4874 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad5a3be4a4bb4d5d1a756d0a99a798f0e7b745ab428423fc92215233ed167496"} err="failed to get container status \"ad5a3be4a4bb4d5d1a756d0a99a798f0e7b745ab428423fc92215233ed167496\": rpc error: code = NotFound desc = could not find container \"ad5a3be4a4bb4d5d1a756d0a99a798f0e7b745ab428423fc92215233ed167496\": container with ID starting with ad5a3be4a4bb4d5d1a756d0a99a798f0e7b745ab428423fc92215233ed167496 not found: ID does not exist" Jan 22 12:13:02 crc kubenswrapper[4874]: I0122 12:13:02.732652 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" path="/var/lib/kubelet/pods/64ecc5d7-a8b0-4e2b-9520-f74ef155f54a/volumes" Jan 22 12:14:13 crc kubenswrapper[4874]: I0122 12:14:13.520286 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:14:13 crc kubenswrapper[4874]: I0122 12:14:13.522570 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:14:43 crc kubenswrapper[4874]: I0122 12:14:43.520743 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:14:43 crc kubenswrapper[4874]: I0122 12:14:43.521472 4874 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.156023 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt"] Jan 22 12:15:00 crc kubenswrapper[4874]: E0122 12:15:00.157044 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bf24c61-16db-4593-ba56-3e13cf4b2320" containerName="extract-utilities" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.157069 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bf24c61-16db-4593-ba56-3e13cf4b2320" containerName="extract-utilities" Jan 22 12:15:00 crc kubenswrapper[4874]: E0122 12:15:00.157091 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" containerName="registry-server" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.157106 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" containerName="registry-server" Jan 22 12:15:00 crc kubenswrapper[4874]: E0122 12:15:00.157147 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bf24c61-16db-4593-ba56-3e13cf4b2320" containerName="extract-content" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.157161 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bf24c61-16db-4593-ba56-3e13cf4b2320" containerName="extract-content" Jan 22 12:15:00 crc kubenswrapper[4874]: E0122 12:15:00.157180 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" containerName="extract-content" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.157192 4874 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" containerName="extract-content" Jan 22 12:15:00 crc kubenswrapper[4874]: E0122 12:15:00.157210 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bf24c61-16db-4593-ba56-3e13cf4b2320" containerName="registry-server" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.157223 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bf24c61-16db-4593-ba56-3e13cf4b2320" containerName="registry-server" Jan 22 12:15:00 crc kubenswrapper[4874]: E0122 12:15:00.157242 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" containerName="extract-utilities" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.157254 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" containerName="extract-utilities" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.157487 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bf24c61-16db-4593-ba56-3e13cf4b2320" containerName="registry-server" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.157525 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ecc5d7-a8b0-4e2b-9520-f74ef155f54a" containerName="registry-server" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.158269 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.163223 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.166237 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.179994 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt"] Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.214956 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-secret-volume\") pod \"collect-profiles-29484735-jrjzt\" (UID: \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.215008 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-config-volume\") pod \"collect-profiles-29484735-jrjzt\" (UID: \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.215079 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlmrd\" (UniqueName: \"kubernetes.io/projected/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-kube-api-access-tlmrd\") pod \"collect-profiles-29484735-jrjzt\" (UID: \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.316038 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-secret-volume\") pod \"collect-profiles-29484735-jrjzt\" (UID: \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.316087 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-config-volume\") pod \"collect-profiles-29484735-jrjzt\" (UID: \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.316126 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlmrd\" (UniqueName: \"kubernetes.io/projected/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-kube-api-access-tlmrd\") pod \"collect-profiles-29484735-jrjzt\" (UID: \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.317062 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-config-volume\") pod \"collect-profiles-29484735-jrjzt\" (UID: \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.337002 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-secret-volume\") pod \"collect-profiles-29484735-jrjzt\" (UID: \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.337006 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlmrd\" (UniqueName: \"kubernetes.io/projected/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-kube-api-access-tlmrd\") pod \"collect-profiles-29484735-jrjzt\" (UID: \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.477593 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" Jan 22 12:15:00 crc kubenswrapper[4874]: I0122 12:15:00.920461 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt"] Jan 22 12:15:01 crc kubenswrapper[4874]: I0122 12:15:01.930073 4874 generic.go:334] "Generic (PLEG): container finished" podID="1b771c8b-4e94-4d20-ac6b-72c4798a65cb" containerID="afe34a02bbd93802a374e833960c467dfba9da2664b1e4cf5e3b189ded4d6c1d" exitCode=0 Jan 22 12:15:01 crc kubenswrapper[4874]: I0122 12:15:01.930284 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" event={"ID":"1b771c8b-4e94-4d20-ac6b-72c4798a65cb","Type":"ContainerDied","Data":"afe34a02bbd93802a374e833960c467dfba9da2664b1e4cf5e3b189ded4d6c1d"} Jan 22 12:15:01 crc kubenswrapper[4874]: I0122 12:15:01.930309 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" 
event={"ID":"1b771c8b-4e94-4d20-ac6b-72c4798a65cb","Type":"ContainerStarted","Data":"e90b3e2345007d5f042662e62d34e7b1bedee54ad7ec531e5233bbfae279d163"} Jan 22 12:15:03 crc kubenswrapper[4874]: I0122 12:15:03.222972 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" Jan 22 12:15:03 crc kubenswrapper[4874]: I0122 12:15:03.373973 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-config-volume\") pod \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\" (UID: \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\") " Jan 22 12:15:03 crc kubenswrapper[4874]: I0122 12:15:03.374071 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlmrd\" (UniqueName: \"kubernetes.io/projected/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-kube-api-access-tlmrd\") pod \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\" (UID: \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\") " Jan 22 12:15:03 crc kubenswrapper[4874]: I0122 12:15:03.374280 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-secret-volume\") pod \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\" (UID: \"1b771c8b-4e94-4d20-ac6b-72c4798a65cb\") " Jan 22 12:15:03 crc kubenswrapper[4874]: I0122 12:15:03.375014 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "1b771c8b-4e94-4d20-ac6b-72c4798a65cb" (UID: "1b771c8b-4e94-4d20-ac6b-72c4798a65cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:15:03 crc kubenswrapper[4874]: I0122 12:15:03.376530 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:03 crc kubenswrapper[4874]: I0122 12:15:03.380538 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1b771c8b-4e94-4d20-ac6b-72c4798a65cb" (UID: "1b771c8b-4e94-4d20-ac6b-72c4798a65cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:15:03 crc kubenswrapper[4874]: I0122 12:15:03.381093 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-kube-api-access-tlmrd" (OuterVolumeSpecName: "kube-api-access-tlmrd") pod "1b771c8b-4e94-4d20-ac6b-72c4798a65cb" (UID: "1b771c8b-4e94-4d20-ac6b-72c4798a65cb"). InnerVolumeSpecName "kube-api-access-tlmrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:15:03 crc kubenswrapper[4874]: I0122 12:15:03.477694 4874 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:03 crc kubenswrapper[4874]: I0122 12:15:03.477755 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlmrd\" (UniqueName: \"kubernetes.io/projected/1b771c8b-4e94-4d20-ac6b-72c4798a65cb-kube-api-access-tlmrd\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:03 crc kubenswrapper[4874]: I0122 12:15:03.959390 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" event={"ID":"1b771c8b-4e94-4d20-ac6b-72c4798a65cb","Type":"ContainerDied","Data":"e90b3e2345007d5f042662e62d34e7b1bedee54ad7ec531e5233bbfae279d163"} Jan 22 12:15:03 crc kubenswrapper[4874]: I0122 12:15:03.959450 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e90b3e2345007d5f042662e62d34e7b1bedee54ad7ec531e5233bbfae279d163" Jan 22 12:15:03 crc kubenswrapper[4874]: I0122 12:15:03.959470 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-jrjzt" Jan 22 12:15:04 crc kubenswrapper[4874]: I0122 12:15:04.294268 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx"] Jan 22 12:15:04 crc kubenswrapper[4874]: I0122 12:15:04.305885 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484690-npncx"] Jan 22 12:15:04 crc kubenswrapper[4874]: I0122 12:15:04.725079 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a15050a-7cbd-40f6-a656-a68293c0878a" path="/var/lib/kubelet/pods/0a15050a-7cbd-40f6-a656-a68293c0878a/volumes" Jan 22 12:15:13 crc kubenswrapper[4874]: I0122 12:15:13.520641 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:15:13 crc kubenswrapper[4874]: I0122 12:15:13.522424 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:15:13 crc kubenswrapper[4874]: I0122 12:15:13.522621 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 12:15:13 crc kubenswrapper[4874]: I0122 12:15:13.523459 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4bd591f21c90200802db0c3a40e96e2477cc7ef98ac950449bf351a75f78ca92"} 
pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:15:13 crc kubenswrapper[4874]: I0122 12:15:13.523634 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://4bd591f21c90200802db0c3a40e96e2477cc7ef98ac950449bf351a75f78ca92" gracePeriod=600 Jan 22 12:15:14 crc kubenswrapper[4874]: I0122 12:15:14.042356 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="4bd591f21c90200802db0c3a40e96e2477cc7ef98ac950449bf351a75f78ca92" exitCode=0 Jan 22 12:15:14 crc kubenswrapper[4874]: I0122 12:15:14.042425 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"4bd591f21c90200802db0c3a40e96e2477cc7ef98ac950449bf351a75f78ca92"} Jan 22 12:15:14 crc kubenswrapper[4874]: I0122 12:15:14.043120 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646"} Jan 22 12:15:14 crc kubenswrapper[4874]: I0122 12:15:14.043166 4874 scope.go:117] "RemoveContainer" containerID="9e9d8998fedf3ebcd297ebcad326b3f27048547f0407cb9bc7f2e574679a4b15" Jan 22 12:15:33 crc kubenswrapper[4874]: I0122 12:15:33.324230 4874 scope.go:117] "RemoveContainer" containerID="059406f38b7eefe033761eb54f07a9cd0f802e07f88ff68ccbdb582fae9d8d28" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.197847 4874 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-sgqzt"] Jan 22 12:16:19 crc kubenswrapper[4874]: E0122 12:16:19.198863 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b771c8b-4e94-4d20-ac6b-72c4798a65cb" containerName="collect-profiles" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.198885 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b771c8b-4e94-4d20-ac6b-72c4798a65cb" containerName="collect-profiles" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.199144 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b771c8b-4e94-4d20-ac6b-72c4798a65cb" containerName="collect-profiles" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.200686 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.208644 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sgqzt"] Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.289004 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d598bb98-3008-4081-8187-bbd8e622bb3a-utilities\") pod \"community-operators-sgqzt\" (UID: \"d598bb98-3008-4081-8187-bbd8e622bb3a\") " pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.289170 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d598bb98-3008-4081-8187-bbd8e622bb3a-catalog-content\") pod \"community-operators-sgqzt\" (UID: \"d598bb98-3008-4081-8187-bbd8e622bb3a\") " pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.289385 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-xzgsk\" (UniqueName: \"kubernetes.io/projected/d598bb98-3008-4081-8187-bbd8e622bb3a-kube-api-access-xzgsk\") pod \"community-operators-sgqzt\" (UID: \"d598bb98-3008-4081-8187-bbd8e622bb3a\") " pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.391260 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d598bb98-3008-4081-8187-bbd8e622bb3a-catalog-content\") pod \"community-operators-sgqzt\" (UID: \"d598bb98-3008-4081-8187-bbd8e622bb3a\") " pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.391606 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzgsk\" (UniqueName: \"kubernetes.io/projected/d598bb98-3008-4081-8187-bbd8e622bb3a-kube-api-access-xzgsk\") pod \"community-operators-sgqzt\" (UID: \"d598bb98-3008-4081-8187-bbd8e622bb3a\") " pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.391674 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d598bb98-3008-4081-8187-bbd8e622bb3a-utilities\") pod \"community-operators-sgqzt\" (UID: \"d598bb98-3008-4081-8187-bbd8e622bb3a\") " pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.392208 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d598bb98-3008-4081-8187-bbd8e622bb3a-catalog-content\") pod \"community-operators-sgqzt\" (UID: \"d598bb98-3008-4081-8187-bbd8e622bb3a\") " pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.392608 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d598bb98-3008-4081-8187-bbd8e622bb3a-utilities\") pod \"community-operators-sgqzt\" (UID: \"d598bb98-3008-4081-8187-bbd8e622bb3a\") " pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.412687 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzgsk\" (UniqueName: \"kubernetes.io/projected/d598bb98-3008-4081-8187-bbd8e622bb3a-kube-api-access-xzgsk\") pod \"community-operators-sgqzt\" (UID: \"d598bb98-3008-4081-8187-bbd8e622bb3a\") " pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:19 crc kubenswrapper[4874]: I0122 12:16:19.536120 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:20 crc kubenswrapper[4874]: I0122 12:16:20.084884 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sgqzt"] Jan 22 12:16:20 crc kubenswrapper[4874]: W0122 12:16:20.089539 4874 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd598bb98_3008_4081_8187_bbd8e622bb3a.slice/crio-095c3d0baf601af83872c20e63e2bb7f4dd45902aba137ead9517326a397037a WatchSource:0}: Error finding container 095c3d0baf601af83872c20e63e2bb7f4dd45902aba137ead9517326a397037a: Status 404 returned error can't find the container with id 095c3d0baf601af83872c20e63e2bb7f4dd45902aba137ead9517326a397037a Jan 22 12:16:20 crc kubenswrapper[4874]: I0122 12:16:20.609001 4874 generic.go:334] "Generic (PLEG): container finished" podID="d598bb98-3008-4081-8187-bbd8e622bb3a" containerID="31aa3e5fdae92637dce6ea824242e7b91959d9977ef5dd71d07c2a22cad8d5de" exitCode=0 Jan 22 12:16:20 crc kubenswrapper[4874]: I0122 12:16:20.609038 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgqzt" 
event={"ID":"d598bb98-3008-4081-8187-bbd8e622bb3a","Type":"ContainerDied","Data":"31aa3e5fdae92637dce6ea824242e7b91959d9977ef5dd71d07c2a22cad8d5de"} Jan 22 12:16:20 crc kubenswrapper[4874]: I0122 12:16:20.609063 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgqzt" event={"ID":"d598bb98-3008-4081-8187-bbd8e622bb3a","Type":"ContainerStarted","Data":"095c3d0baf601af83872c20e63e2bb7f4dd45902aba137ead9517326a397037a"} Jan 22 12:16:22 crc kubenswrapper[4874]: I0122 12:16:22.633622 4874 generic.go:334] "Generic (PLEG): container finished" podID="d598bb98-3008-4081-8187-bbd8e622bb3a" containerID="5f7d8ee5429b88b106f7fa3dbfe8f691f3605c7e687b8b5b69bea2d22edbf30c" exitCode=0 Jan 22 12:16:22 crc kubenswrapper[4874]: I0122 12:16:22.634289 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgqzt" event={"ID":"d598bb98-3008-4081-8187-bbd8e622bb3a","Type":"ContainerDied","Data":"5f7d8ee5429b88b106f7fa3dbfe8f691f3605c7e687b8b5b69bea2d22edbf30c"} Jan 22 12:16:23 crc kubenswrapper[4874]: I0122 12:16:23.653869 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgqzt" event={"ID":"d598bb98-3008-4081-8187-bbd8e622bb3a","Type":"ContainerStarted","Data":"59d9b526221eb4c75719e1f69ba088b18a62ae3391366f245f2d3a8aab22ec4e"} Jan 22 12:16:23 crc kubenswrapper[4874]: I0122 12:16:23.698136 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sgqzt" podStartSLOduration=2.266234752 podStartE2EDuration="4.698110735s" podCreationTimestamp="2026-01-22 12:16:19 +0000 UTC" firstStartedPulling="2026-01-22 12:16:20.611941165 +0000 UTC m=+2154.457012235" lastFinishedPulling="2026-01-22 12:16:23.043817138 +0000 UTC m=+2156.888888218" observedRunningTime="2026-01-22 12:16:23.693789252 +0000 UTC m=+2157.538860322" watchObservedRunningTime="2026-01-22 12:16:23.698110735 +0000 UTC 
m=+2157.543181815" Jan 22 12:16:29 crc kubenswrapper[4874]: I0122 12:16:29.536466 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:29 crc kubenswrapper[4874]: I0122 12:16:29.537202 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:29 crc kubenswrapper[4874]: I0122 12:16:29.598637 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:29 crc kubenswrapper[4874]: I0122 12:16:29.775376 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:31 crc kubenswrapper[4874]: I0122 12:16:31.973588 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sgqzt"] Jan 22 12:16:31 crc kubenswrapper[4874]: I0122 12:16:31.974172 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sgqzt" podUID="d598bb98-3008-4081-8187-bbd8e622bb3a" containerName="registry-server" containerID="cri-o://59d9b526221eb4c75719e1f69ba088b18a62ae3391366f245f2d3a8aab22ec4e" gracePeriod=2 Jan 22 12:16:32 crc kubenswrapper[4874]: I0122 12:16:32.756963 4874 generic.go:334] "Generic (PLEG): container finished" podID="d598bb98-3008-4081-8187-bbd8e622bb3a" containerID="59d9b526221eb4c75719e1f69ba088b18a62ae3391366f245f2d3a8aab22ec4e" exitCode=0 Jan 22 12:16:32 crc kubenswrapper[4874]: I0122 12:16:32.757251 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgqzt" event={"ID":"d598bb98-3008-4081-8187-bbd8e622bb3a","Type":"ContainerDied","Data":"59d9b526221eb4c75719e1f69ba088b18a62ae3391366f245f2d3a8aab22ec4e"} Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.558633 4874 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.675641 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d598bb98-3008-4081-8187-bbd8e622bb3a-utilities\") pod \"d598bb98-3008-4081-8187-bbd8e622bb3a\" (UID: \"d598bb98-3008-4081-8187-bbd8e622bb3a\") " Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.675753 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzgsk\" (UniqueName: \"kubernetes.io/projected/d598bb98-3008-4081-8187-bbd8e622bb3a-kube-api-access-xzgsk\") pod \"d598bb98-3008-4081-8187-bbd8e622bb3a\" (UID: \"d598bb98-3008-4081-8187-bbd8e622bb3a\") " Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.676065 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d598bb98-3008-4081-8187-bbd8e622bb3a-catalog-content\") pod \"d598bb98-3008-4081-8187-bbd8e622bb3a\" (UID: \"d598bb98-3008-4081-8187-bbd8e622bb3a\") " Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.677433 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d598bb98-3008-4081-8187-bbd8e622bb3a-utilities" (OuterVolumeSpecName: "utilities") pod "d598bb98-3008-4081-8187-bbd8e622bb3a" (UID: "d598bb98-3008-4081-8187-bbd8e622bb3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.704975 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d598bb98-3008-4081-8187-bbd8e622bb3a-kube-api-access-xzgsk" (OuterVolumeSpecName: "kube-api-access-xzgsk") pod "d598bb98-3008-4081-8187-bbd8e622bb3a" (UID: "d598bb98-3008-4081-8187-bbd8e622bb3a"). 
InnerVolumeSpecName "kube-api-access-xzgsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.740531 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d598bb98-3008-4081-8187-bbd8e622bb3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d598bb98-3008-4081-8187-bbd8e622bb3a" (UID: "d598bb98-3008-4081-8187-bbd8e622bb3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.770534 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sgqzt" event={"ID":"d598bb98-3008-4081-8187-bbd8e622bb3a","Type":"ContainerDied","Data":"095c3d0baf601af83872c20e63e2bb7f4dd45902aba137ead9517326a397037a"} Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.770589 4874 scope.go:117] "RemoveContainer" containerID="59d9b526221eb4c75719e1f69ba088b18a62ae3391366f245f2d3a8aab22ec4e" Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.770759 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sgqzt" Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.778673 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d598bb98-3008-4081-8187-bbd8e622bb3a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.778721 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d598bb98-3008-4081-8187-bbd8e622bb3a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.778742 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzgsk\" (UniqueName: \"kubernetes.io/projected/d598bb98-3008-4081-8187-bbd8e622bb3a-kube-api-access-xzgsk\") on node \"crc\" DevicePath \"\"" Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.788498 4874 scope.go:117] "RemoveContainer" containerID="5f7d8ee5429b88b106f7fa3dbfe8f691f3605c7e687b8b5b69bea2d22edbf30c" Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.809744 4874 scope.go:117] "RemoveContainer" containerID="31aa3e5fdae92637dce6ea824242e7b91959d9977ef5dd71d07c2a22cad8d5de" Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.873152 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sgqzt"] Jan 22 12:16:33 crc kubenswrapper[4874]: I0122 12:16:33.876937 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sgqzt"] Jan 22 12:16:34 crc kubenswrapper[4874]: I0122 12:16:34.729893 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d598bb98-3008-4081-8187-bbd8e622bb3a" path="/var/lib/kubelet/pods/d598bb98-3008-4081-8187-bbd8e622bb3a/volumes" Jan 22 12:17:13 crc kubenswrapper[4874]: I0122 12:17:13.520559 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:17:13 crc kubenswrapper[4874]: I0122 12:17:13.521174 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:17:43 crc kubenswrapper[4874]: I0122 12:17:43.521083 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:17:43 crc kubenswrapper[4874]: I0122 12:17:43.521757 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:18:13 crc kubenswrapper[4874]: I0122 12:18:13.520342 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:18:13 crc kubenswrapper[4874]: I0122 12:18:13.521040 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:18:13 crc kubenswrapper[4874]: I0122 12:18:13.521092 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 12:18:13 crc kubenswrapper[4874]: I0122 12:18:13.521748 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:18:13 crc kubenswrapper[4874]: I0122 12:18:13.521815 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" gracePeriod=600 Jan 22 12:18:13 crc kubenswrapper[4874]: E0122 12:18:13.653738 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:18:13 crc kubenswrapper[4874]: I0122 12:18:13.707523 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" exitCode=0 Jan 22 12:18:13 crc kubenswrapper[4874]: I0122 12:18:13.707581 4874 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646"} Jan 22 12:18:13 crc kubenswrapper[4874]: I0122 12:18:13.707628 4874 scope.go:117] "RemoveContainer" containerID="4bd591f21c90200802db0c3a40e96e2477cc7ef98ac950449bf351a75f78ca92" Jan 22 12:18:13 crc kubenswrapper[4874]: I0122 12:18:13.710588 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:18:13 crc kubenswrapper[4874]: E0122 12:18:13.711095 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:18:27 crc kubenswrapper[4874]: I0122 12:18:27.716072 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:18:27 crc kubenswrapper[4874]: E0122 12:18:27.717285 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:18:39 crc kubenswrapper[4874]: I0122 12:18:39.717164 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:18:39 crc kubenswrapper[4874]: E0122 
12:18:39.718512 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:18:54 crc kubenswrapper[4874]: I0122 12:18:54.723178 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:18:54 crc kubenswrapper[4874]: E0122 12:18:54.724208 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:19:06 crc kubenswrapper[4874]: I0122 12:19:06.726843 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:19:06 crc kubenswrapper[4874]: E0122 12:19:06.728099 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:19:18 crc kubenswrapper[4874]: I0122 12:19:18.716327 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:19:18 crc 
kubenswrapper[4874]: E0122 12:19:18.717504 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:19:29 crc kubenswrapper[4874]: I0122 12:19:29.716531 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:19:29 crc kubenswrapper[4874]: E0122 12:19:29.717242 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:19:42 crc kubenswrapper[4874]: I0122 12:19:42.716512 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:19:42 crc kubenswrapper[4874]: E0122 12:19:42.717484 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:19:57 crc kubenswrapper[4874]: I0122 12:19:57.716312 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 
22 12:19:57 crc kubenswrapper[4874]: E0122 12:19:57.717466 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:20:10 crc kubenswrapper[4874]: I0122 12:20:10.716298 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:20:10 crc kubenswrapper[4874]: E0122 12:20:10.717596 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:20:21 crc kubenswrapper[4874]: I0122 12:20:21.718199 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:20:21 crc kubenswrapper[4874]: E0122 12:20:21.718885 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:20:34 crc kubenswrapper[4874]: I0122 12:20:34.716230 4874 scope.go:117] "RemoveContainer" 
containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:20:34 crc kubenswrapper[4874]: E0122 12:20:34.717119 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:20:46 crc kubenswrapper[4874]: I0122 12:20:46.724653 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:20:46 crc kubenswrapper[4874]: E0122 12:20:46.725667 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:20:59 crc kubenswrapper[4874]: I0122 12:20:59.716576 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:20:59 crc kubenswrapper[4874]: E0122 12:20:59.719392 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:21:12 crc kubenswrapper[4874]: I0122 12:21:12.716830 4874 scope.go:117] 
"RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:21:12 crc kubenswrapper[4874]: E0122 12:21:12.717763 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:21:27 crc kubenswrapper[4874]: I0122 12:21:27.716296 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:21:27 crc kubenswrapper[4874]: E0122 12:21:27.718653 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:21:40 crc kubenswrapper[4874]: I0122 12:21:40.716283 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:21:40 crc kubenswrapper[4874]: E0122 12:21:40.718077 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:21:51 crc kubenswrapper[4874]: I0122 12:21:51.715934 
4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:21:51 crc kubenswrapper[4874]: E0122 12:21:51.716669 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:22:02 crc kubenswrapper[4874]: I0122 12:22:02.717289 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:22:02 crc kubenswrapper[4874]: E0122 12:22:02.718310 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:22:15 crc kubenswrapper[4874]: I0122 12:22:15.716273 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:22:15 crc kubenswrapper[4874]: E0122 12:22:15.716925 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:22:28 crc kubenswrapper[4874]: I0122 
12:22:28.716825 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:22:28 crc kubenswrapper[4874]: E0122 12:22:28.717743 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:22:39 crc kubenswrapper[4874]: I0122 12:22:39.720933 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:22:39 crc kubenswrapper[4874]: E0122 12:22:39.721789 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:22:52 crc kubenswrapper[4874]: I0122 12:22:52.716661 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646" Jan 22 12:22:52 crc kubenswrapper[4874]: E0122 12:22:52.717596 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:22:53 crc 
kubenswrapper[4874]: I0122 12:22:53.874279 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wmx9h"] Jan 22 12:22:53 crc kubenswrapper[4874]: E0122 12:22:53.875244 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d598bb98-3008-4081-8187-bbd8e622bb3a" containerName="extract-utilities" Jan 22 12:22:53 crc kubenswrapper[4874]: I0122 12:22:53.875279 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d598bb98-3008-4081-8187-bbd8e622bb3a" containerName="extract-utilities" Jan 22 12:22:53 crc kubenswrapper[4874]: E0122 12:22:53.875302 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d598bb98-3008-4081-8187-bbd8e622bb3a" containerName="extract-content" Jan 22 12:22:53 crc kubenswrapper[4874]: I0122 12:22:53.875320 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d598bb98-3008-4081-8187-bbd8e622bb3a" containerName="extract-content" Jan 22 12:22:53 crc kubenswrapper[4874]: E0122 12:22:53.875383 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d598bb98-3008-4081-8187-bbd8e622bb3a" containerName="registry-server" Jan 22 12:22:53 crc kubenswrapper[4874]: I0122 12:22:53.875428 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d598bb98-3008-4081-8187-bbd8e622bb3a" containerName="registry-server" Jan 22 12:22:53 crc kubenswrapper[4874]: I0122 12:22:53.875764 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d598bb98-3008-4081-8187-bbd8e622bb3a" containerName="registry-server" Jan 22 12:22:53 crc kubenswrapper[4874]: I0122 12:22:53.878050 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:22:53 crc kubenswrapper[4874]: I0122 12:22:53.904649 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wmx9h"] Jan 22 12:22:53 crc kubenswrapper[4874]: I0122 12:22:53.968469 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wjjg\" (UniqueName: \"kubernetes.io/projected/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-kube-api-access-6wjjg\") pod \"certified-operators-wmx9h\" (UID: \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\") " pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:22:53 crc kubenswrapper[4874]: I0122 12:22:53.968604 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-utilities\") pod \"certified-operators-wmx9h\" (UID: \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\") " pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:22:53 crc kubenswrapper[4874]: I0122 12:22:53.968731 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-catalog-content\") pod \"certified-operators-wmx9h\" (UID: \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\") " pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:22:54 crc kubenswrapper[4874]: I0122 12:22:54.070512 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wjjg\" (UniqueName: \"kubernetes.io/projected/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-kube-api-access-6wjjg\") pod \"certified-operators-wmx9h\" (UID: \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\") " pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:22:54 crc kubenswrapper[4874]: I0122 12:22:54.070579 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-utilities\") pod \"certified-operators-wmx9h\" (UID: \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\") " pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:22:54 crc kubenswrapper[4874]: I0122 12:22:54.070654 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-catalog-content\") pod \"certified-operators-wmx9h\" (UID: \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\") " pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:22:54 crc kubenswrapper[4874]: I0122 12:22:54.071351 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-catalog-content\") pod \"certified-operators-wmx9h\" (UID: \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\") " pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:22:54 crc kubenswrapper[4874]: I0122 12:22:54.072172 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-utilities\") pod \"certified-operators-wmx9h\" (UID: \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\") " pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:22:54 crc kubenswrapper[4874]: I0122 12:22:54.096850 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wjjg\" (UniqueName: \"kubernetes.io/projected/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-kube-api-access-6wjjg\") pod \"certified-operators-wmx9h\" (UID: \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\") " pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:22:54 crc kubenswrapper[4874]: I0122 12:22:54.199488 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:22:54 crc kubenswrapper[4874]: I0122 12:22:54.495949 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wmx9h"] Jan 22 12:22:54 crc kubenswrapper[4874]: I0122 12:22:54.998271 4874 generic.go:334] "Generic (PLEG): container finished" podID="d6f3e52c-c484-443a-aee8-bfc31e28c9b8" containerID="947367f2e70a6cdb0c481406ee1ca0c3c4d739abf78030cfbf933e595b9ca109" exitCode=0 Jan 22 12:22:54 crc kubenswrapper[4874]: I0122 12:22:54.998324 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx9h" event={"ID":"d6f3e52c-c484-443a-aee8-bfc31e28c9b8","Type":"ContainerDied","Data":"947367f2e70a6cdb0c481406ee1ca0c3c4d739abf78030cfbf933e595b9ca109"} Jan 22 12:22:54 crc kubenswrapper[4874]: I0122 12:22:54.998359 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx9h" event={"ID":"d6f3e52c-c484-443a-aee8-bfc31e28c9b8","Type":"ContainerStarted","Data":"e43a9df2d62e043104820584dc48aa02e31ee1e3d15d3acd5111300752598cd9"} Jan 22 12:22:55 crc kubenswrapper[4874]: I0122 12:22:55.000802 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:22:56 crc kubenswrapper[4874]: I0122 12:22:56.011624 4874 generic.go:334] "Generic (PLEG): container finished" podID="d6f3e52c-c484-443a-aee8-bfc31e28c9b8" containerID="33a21e73f13e2c4c680d2d6e9593fb5a6b498dcdb3dfac7b50ef86c6cca2642f" exitCode=0 Jan 22 12:22:56 crc kubenswrapper[4874]: I0122 12:22:56.011746 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx9h" event={"ID":"d6f3e52c-c484-443a-aee8-bfc31e28c9b8","Type":"ContainerDied","Data":"33a21e73f13e2c4c680d2d6e9593fb5a6b498dcdb3dfac7b50ef86c6cca2642f"} Jan 22 12:22:57 crc kubenswrapper[4874]: I0122 12:22:57.023063 4874 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-wmx9h" event={"ID":"d6f3e52c-c484-443a-aee8-bfc31e28c9b8","Type":"ContainerStarted","Data":"2c7a2b76b6152fe45c04a7b90cc5de979ff0bf8462fcdee1fcc8edc296fadafe"} Jan 22 12:22:57 crc kubenswrapper[4874]: I0122 12:22:57.051440 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wmx9h" podStartSLOduration=2.632070432 podStartE2EDuration="4.051415472s" podCreationTimestamp="2026-01-22 12:22:53 +0000 UTC" firstStartedPulling="2026-01-22 12:22:55.000286607 +0000 UTC m=+2548.845357717" lastFinishedPulling="2026-01-22 12:22:56.419631647 +0000 UTC m=+2550.264702757" observedRunningTime="2026-01-22 12:22:57.048201762 +0000 UTC m=+2550.893272852" watchObservedRunningTime="2026-01-22 12:22:57.051415472 +0000 UTC m=+2550.896486562" Jan 22 12:23:04 crc kubenswrapper[4874]: I0122 12:23:04.200591 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:23:04 crc kubenswrapper[4874]: I0122 12:23:04.201456 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:23:04 crc kubenswrapper[4874]: I0122 12:23:04.274598 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:23:05 crc kubenswrapper[4874]: I0122 12:23:05.166672 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wmx9h" Jan 22 12:23:05 crc kubenswrapper[4874]: I0122 12:23:05.270527 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wmx9h"] Jan 22 12:23:07 crc kubenswrapper[4874]: I0122 12:23:07.121718 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wmx9h" 
podUID="d6f3e52c-c484-443a-aee8-bfc31e28c9b8" containerName="registry-server" containerID="cri-o://2c7a2b76b6152fe45c04a7b90cc5de979ff0bf8462fcdee1fcc8edc296fadafe" gracePeriod=2
Jan 22 12:23:07 crc kubenswrapper[4874]: I0122 12:23:07.716890 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646"
Jan 22 12:23:07 crc kubenswrapper[4874]: E0122 12:23:07.717826 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.092836 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wmx9h"
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.105933 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-utilities\") pod \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\" (UID: \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\") "
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.106233 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wjjg\" (UniqueName: \"kubernetes.io/projected/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-kube-api-access-6wjjg\") pod \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\" (UID: \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\") "
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.106281 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-catalog-content\") pod \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\" (UID: \"d6f3e52c-c484-443a-aee8-bfc31e28c9b8\") "
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.107242 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-utilities" (OuterVolumeSpecName: "utilities") pod "d6f3e52c-c484-443a-aee8-bfc31e28c9b8" (UID: "d6f3e52c-c484-443a-aee8-bfc31e28c9b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.116126 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-kube-api-access-6wjjg" (OuterVolumeSpecName: "kube-api-access-6wjjg") pod "d6f3e52c-c484-443a-aee8-bfc31e28c9b8" (UID: "d6f3e52c-c484-443a-aee8-bfc31e28c9b8"). InnerVolumeSpecName "kube-api-access-6wjjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.135236 4874 generic.go:334] "Generic (PLEG): container finished" podID="d6f3e52c-c484-443a-aee8-bfc31e28c9b8" containerID="2c7a2b76b6152fe45c04a7b90cc5de979ff0bf8462fcdee1fcc8edc296fadafe" exitCode=0
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.135284 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx9h" event={"ID":"d6f3e52c-c484-443a-aee8-bfc31e28c9b8","Type":"ContainerDied","Data":"2c7a2b76b6152fe45c04a7b90cc5de979ff0bf8462fcdee1fcc8edc296fadafe"}
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.135313 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx9h" event={"ID":"d6f3e52c-c484-443a-aee8-bfc31e28c9b8","Type":"ContainerDied","Data":"e43a9df2d62e043104820584dc48aa02e31ee1e3d15d3acd5111300752598cd9"}
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.135330 4874 scope.go:117] "RemoveContainer" containerID="2c7a2b76b6152fe45c04a7b90cc5de979ff0bf8462fcdee1fcc8edc296fadafe"
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.135487 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wmx9h"
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.160978 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6f3e52c-c484-443a-aee8-bfc31e28c9b8" (UID: "d6f3e52c-c484-443a-aee8-bfc31e28c9b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.168209 4874 scope.go:117] "RemoveContainer" containerID="33a21e73f13e2c4c680d2d6e9593fb5a6b498dcdb3dfac7b50ef86c6cca2642f"
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.186239 4874 scope.go:117] "RemoveContainer" containerID="947367f2e70a6cdb0c481406ee1ca0c3c4d739abf78030cfbf933e595b9ca109"
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.208790 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wjjg\" (UniqueName: \"kubernetes.io/projected/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-kube-api-access-6wjjg\") on node \"crc\" DevicePath \"\""
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.208859 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.208896 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6f3e52c-c484-443a-aee8-bfc31e28c9b8-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.223315 4874 scope.go:117] "RemoveContainer" containerID="2c7a2b76b6152fe45c04a7b90cc5de979ff0bf8462fcdee1fcc8edc296fadafe"
Jan 22 12:23:08 crc kubenswrapper[4874]: E0122 12:23:08.223860 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c7a2b76b6152fe45c04a7b90cc5de979ff0bf8462fcdee1fcc8edc296fadafe\": container with ID starting with 2c7a2b76b6152fe45c04a7b90cc5de979ff0bf8462fcdee1fcc8edc296fadafe not found: ID does not exist" containerID="2c7a2b76b6152fe45c04a7b90cc5de979ff0bf8462fcdee1fcc8edc296fadafe"
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.223962 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7a2b76b6152fe45c04a7b90cc5de979ff0bf8462fcdee1fcc8edc296fadafe"} err="failed to get container status \"2c7a2b76b6152fe45c04a7b90cc5de979ff0bf8462fcdee1fcc8edc296fadafe\": rpc error: code = NotFound desc = could not find container \"2c7a2b76b6152fe45c04a7b90cc5de979ff0bf8462fcdee1fcc8edc296fadafe\": container with ID starting with 2c7a2b76b6152fe45c04a7b90cc5de979ff0bf8462fcdee1fcc8edc296fadafe not found: ID does not exist"
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.224046 4874 scope.go:117] "RemoveContainer" containerID="33a21e73f13e2c4c680d2d6e9593fb5a6b498dcdb3dfac7b50ef86c6cca2642f"
Jan 22 12:23:08 crc kubenswrapper[4874]: E0122 12:23:08.224586 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33a21e73f13e2c4c680d2d6e9593fb5a6b498dcdb3dfac7b50ef86c6cca2642f\": container with ID starting with 33a21e73f13e2c4c680d2d6e9593fb5a6b498dcdb3dfac7b50ef86c6cca2642f not found: ID does not exist" containerID="33a21e73f13e2c4c680d2d6e9593fb5a6b498dcdb3dfac7b50ef86c6cca2642f"
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.224648 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a21e73f13e2c4c680d2d6e9593fb5a6b498dcdb3dfac7b50ef86c6cca2642f"} err="failed to get container status \"33a21e73f13e2c4c680d2d6e9593fb5a6b498dcdb3dfac7b50ef86c6cca2642f\": rpc error: code = NotFound desc = could not find container \"33a21e73f13e2c4c680d2d6e9593fb5a6b498dcdb3dfac7b50ef86c6cca2642f\": container with ID starting with 33a21e73f13e2c4c680d2d6e9593fb5a6b498dcdb3dfac7b50ef86c6cca2642f not found: ID does not exist"
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.224696 4874 scope.go:117] "RemoveContainer" containerID="947367f2e70a6cdb0c481406ee1ca0c3c4d739abf78030cfbf933e595b9ca109"
Jan 22 12:23:08 crc kubenswrapper[4874]: E0122 12:23:08.225091 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"947367f2e70a6cdb0c481406ee1ca0c3c4d739abf78030cfbf933e595b9ca109\": container with ID starting with 947367f2e70a6cdb0c481406ee1ca0c3c4d739abf78030cfbf933e595b9ca109 not found: ID does not exist" containerID="947367f2e70a6cdb0c481406ee1ca0c3c4d739abf78030cfbf933e595b9ca109"
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.225172 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947367f2e70a6cdb0c481406ee1ca0c3c4d739abf78030cfbf933e595b9ca109"} err="failed to get container status \"947367f2e70a6cdb0c481406ee1ca0c3c4d739abf78030cfbf933e595b9ca109\": rpc error: code = NotFound desc = could not find container \"947367f2e70a6cdb0c481406ee1ca0c3c4d739abf78030cfbf933e595b9ca109\": container with ID starting with 947367f2e70a6cdb0c481406ee1ca0c3c4d739abf78030cfbf933e595b9ca109 not found: ID does not exist"
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.503671 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wmx9h"]
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.514062 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wmx9h"]
Jan 22 12:23:08 crc kubenswrapper[4874]: I0122 12:23:08.728589 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6f3e52c-c484-443a-aee8-bfc31e28c9b8" path="/var/lib/kubelet/pods/d6f3e52c-c484-443a-aee8-bfc31e28c9b8/volumes"
Jan 22 12:23:22 crc kubenswrapper[4874]: I0122 12:23:22.716516 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646"
Jan 22 12:23:23 crc kubenswrapper[4874]: I0122 12:23:23.280418 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"63365a511b966484f39bb84f505e4f8a844bd95a40ffac8357b30eaa5bacd904"}
Jan 22 12:25:43 crc kubenswrapper[4874]: I0122 12:25:43.520146 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 12:25:43 crc kubenswrapper[4874]: I0122 12:25:43.520802 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 12:25:57 crc kubenswrapper[4874]: I0122 12:25:57.906479 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cdztk"]
Jan 22 12:25:57 crc kubenswrapper[4874]: E0122 12:25:57.907671 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f3e52c-c484-443a-aee8-bfc31e28c9b8" containerName="registry-server"
Jan 22 12:25:57 crc kubenswrapper[4874]: I0122 12:25:57.907704 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f3e52c-c484-443a-aee8-bfc31e28c9b8" containerName="registry-server"
Jan 22 12:25:57 crc kubenswrapper[4874]: E0122 12:25:57.907730 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f3e52c-c484-443a-aee8-bfc31e28c9b8" containerName="extract-utilities"
Jan 22 12:25:57 crc kubenswrapper[4874]: I0122 12:25:57.907747 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f3e52c-c484-443a-aee8-bfc31e28c9b8" containerName="extract-utilities"
Jan 22 12:25:57 crc kubenswrapper[4874]: E0122 12:25:57.907793 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f3e52c-c484-443a-aee8-bfc31e28c9b8" containerName="extract-content"
Jan 22 12:25:57 crc kubenswrapper[4874]: I0122 12:25:57.907812 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f3e52c-c484-443a-aee8-bfc31e28c9b8" containerName="extract-content"
Jan 22 12:25:57 crc kubenswrapper[4874]: I0122 12:25:57.908058 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6f3e52c-c484-443a-aee8-bfc31e28c9b8" containerName="registry-server"
Jan 22 12:25:57 crc kubenswrapper[4874]: I0122 12:25:57.910008 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:25:57 crc kubenswrapper[4874]: I0122 12:25:57.915202 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdztk"]
Jan 22 12:25:58 crc kubenswrapper[4874]: I0122 12:25:58.002090 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dd15bf-729b-415a-835f-20a2d777fec0-utilities\") pod \"redhat-operators-cdztk\" (UID: \"b6dd15bf-729b-415a-835f-20a2d777fec0\") " pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:25:58 crc kubenswrapper[4874]: I0122 12:25:58.002230 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dd15bf-729b-415a-835f-20a2d777fec0-catalog-content\") pod \"redhat-operators-cdztk\" (UID: \"b6dd15bf-729b-415a-835f-20a2d777fec0\") " pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:25:58 crc kubenswrapper[4874]: I0122 12:25:58.002279 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxrrm\" (UniqueName: \"kubernetes.io/projected/b6dd15bf-729b-415a-835f-20a2d777fec0-kube-api-access-dxrrm\") pod \"redhat-operators-cdztk\" (UID: \"b6dd15bf-729b-415a-835f-20a2d777fec0\") " pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:25:58 crc kubenswrapper[4874]: I0122 12:25:58.103110 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dd15bf-729b-415a-835f-20a2d777fec0-utilities\") pod \"redhat-operators-cdztk\" (UID: \"b6dd15bf-729b-415a-835f-20a2d777fec0\") " pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:25:58 crc kubenswrapper[4874]: I0122 12:25:58.103244 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dd15bf-729b-415a-835f-20a2d777fec0-catalog-content\") pod \"redhat-operators-cdztk\" (UID: \"b6dd15bf-729b-415a-835f-20a2d777fec0\") " pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:25:58 crc kubenswrapper[4874]: I0122 12:25:58.103289 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxrrm\" (UniqueName: \"kubernetes.io/projected/b6dd15bf-729b-415a-835f-20a2d777fec0-kube-api-access-dxrrm\") pod \"redhat-operators-cdztk\" (UID: \"b6dd15bf-729b-415a-835f-20a2d777fec0\") " pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:25:58 crc kubenswrapper[4874]: I0122 12:25:58.103568 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dd15bf-729b-415a-835f-20a2d777fec0-utilities\") pod \"redhat-operators-cdztk\" (UID: \"b6dd15bf-729b-415a-835f-20a2d777fec0\") " pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:25:58 crc kubenswrapper[4874]: I0122 12:25:58.103699 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dd15bf-729b-415a-835f-20a2d777fec0-catalog-content\") pod \"redhat-operators-cdztk\" (UID: \"b6dd15bf-729b-415a-835f-20a2d777fec0\") " pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:25:58 crc kubenswrapper[4874]: I0122 12:25:58.125357 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxrrm\" (UniqueName: \"kubernetes.io/projected/b6dd15bf-729b-415a-835f-20a2d777fec0-kube-api-access-dxrrm\") pod \"redhat-operators-cdztk\" (UID: \"b6dd15bf-729b-415a-835f-20a2d777fec0\") " pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:25:58 crc kubenswrapper[4874]: I0122 12:25:58.238967 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:25:58 crc kubenswrapper[4874]: I0122 12:25:58.518695 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdztk"]
Jan 22 12:25:58 crc kubenswrapper[4874]: I0122 12:25:58.669289 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdztk" event={"ID":"b6dd15bf-729b-415a-835f-20a2d777fec0","Type":"ContainerStarted","Data":"03ec6cb9b7fadd34d1fe75156c0aa5d424f939eecdfd48ef6b48f534c40ec2e0"}
Jan 22 12:25:59 crc kubenswrapper[4874]: I0122 12:25:59.677928 4874 generic.go:334] "Generic (PLEG): container finished" podID="b6dd15bf-729b-415a-835f-20a2d777fec0" containerID="3256cbd3318e852d8cc4290f2bb70ae12b509bf1d1903df37059cd9dca3f3ac5" exitCode=0
Jan 22 12:25:59 crc kubenswrapper[4874]: I0122 12:25:59.678039 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdztk" event={"ID":"b6dd15bf-729b-415a-835f-20a2d777fec0","Type":"ContainerDied","Data":"3256cbd3318e852d8cc4290f2bb70ae12b509bf1d1903df37059cd9dca3f3ac5"}
Jan 22 12:26:00 crc kubenswrapper[4874]: I0122 12:26:00.688001 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdztk" event={"ID":"b6dd15bf-729b-415a-835f-20a2d777fec0","Type":"ContainerStarted","Data":"22314f87718ae2d216abf1d85d2d6624b190fa8642a36d93ae61ae2b86956b68"}
Jan 22 12:26:01 crc kubenswrapper[4874]: I0122 12:26:01.699961 4874 generic.go:334] "Generic (PLEG): container finished" podID="b6dd15bf-729b-415a-835f-20a2d777fec0" containerID="22314f87718ae2d216abf1d85d2d6624b190fa8642a36d93ae61ae2b86956b68" exitCode=0
Jan 22 12:26:01 crc kubenswrapper[4874]: I0122 12:26:01.700031 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdztk" event={"ID":"b6dd15bf-729b-415a-835f-20a2d777fec0","Type":"ContainerDied","Data":"22314f87718ae2d216abf1d85d2d6624b190fa8642a36d93ae61ae2b86956b68"}
Jan 22 12:26:02 crc kubenswrapper[4874]: I0122 12:26:02.730731 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdztk" event={"ID":"b6dd15bf-729b-415a-835f-20a2d777fec0","Type":"ContainerStarted","Data":"d95550138a7512b811fbb08caadebf233ff8761506de6ed159f6c97c657a2cb4"}
Jan 22 12:26:02 crc kubenswrapper[4874]: I0122 12:26:02.745956 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cdztk" podStartSLOduration=3.098073188 podStartE2EDuration="5.745936145s" podCreationTimestamp="2026-01-22 12:25:57 +0000 UTC" firstStartedPulling="2026-01-22 12:25:59.679882421 +0000 UTC m=+2733.524953491" lastFinishedPulling="2026-01-22 12:26:02.327745338 +0000 UTC m=+2736.172816448" observedRunningTime="2026-01-22 12:26:02.736536505 +0000 UTC m=+2736.581607585" watchObservedRunningTime="2026-01-22 12:26:02.745936145 +0000 UTC m=+2736.591007225"
Jan 22 12:26:08 crc kubenswrapper[4874]: I0122 12:26:08.239600 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:26:08 crc kubenswrapper[4874]: I0122 12:26:08.240689 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:26:09 crc kubenswrapper[4874]: I0122 12:26:09.290239 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cdztk" podUID="b6dd15bf-729b-415a-835f-20a2d777fec0" containerName="registry-server" probeResult="failure" output=<
Jan 22 12:26:09 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s
Jan 22 12:26:09 crc kubenswrapper[4874]: >
Jan 22 12:26:13 crc kubenswrapper[4874]: I0122 12:26:13.520974 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 12:26:13 crc kubenswrapper[4874]: I0122 12:26:13.521387 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 12:26:18 crc kubenswrapper[4874]: I0122 12:26:18.283621 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:26:18 crc kubenswrapper[4874]: I0122 12:26:18.343379 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:26:19 crc kubenswrapper[4874]: I0122 12:26:19.801763 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdztk"]
Jan 22 12:26:19 crc kubenswrapper[4874]: I0122 12:26:19.862375 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cdztk" podUID="b6dd15bf-729b-415a-835f-20a2d777fec0" containerName="registry-server" containerID="cri-o://d95550138a7512b811fbb08caadebf233ff8761506de6ed159f6c97c657a2cb4" gracePeriod=2
Jan 22 12:26:20 crc kubenswrapper[4874]: I0122 12:26:20.877446 4874 generic.go:334] "Generic (PLEG): container finished" podID="b6dd15bf-729b-415a-835f-20a2d777fec0" containerID="d95550138a7512b811fbb08caadebf233ff8761506de6ed159f6c97c657a2cb4" exitCode=0
Jan 22 12:26:20 crc kubenswrapper[4874]: I0122 12:26:20.877498 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdztk" event={"ID":"b6dd15bf-729b-415a-835f-20a2d777fec0","Type":"ContainerDied","Data":"d95550138a7512b811fbb08caadebf233ff8761506de6ed159f6c97c657a2cb4"}
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.379894 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.535120 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dd15bf-729b-415a-835f-20a2d777fec0-catalog-content\") pod \"b6dd15bf-729b-415a-835f-20a2d777fec0\" (UID: \"b6dd15bf-729b-415a-835f-20a2d777fec0\") "
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.535182 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxrrm\" (UniqueName: \"kubernetes.io/projected/b6dd15bf-729b-415a-835f-20a2d777fec0-kube-api-access-dxrrm\") pod \"b6dd15bf-729b-415a-835f-20a2d777fec0\" (UID: \"b6dd15bf-729b-415a-835f-20a2d777fec0\") "
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.535263 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dd15bf-729b-415a-835f-20a2d777fec0-utilities\") pod \"b6dd15bf-729b-415a-835f-20a2d777fec0\" (UID: \"b6dd15bf-729b-415a-835f-20a2d777fec0\") "
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.536428 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6dd15bf-729b-415a-835f-20a2d777fec0-utilities" (OuterVolumeSpecName: "utilities") pod "b6dd15bf-729b-415a-835f-20a2d777fec0" (UID: "b6dd15bf-729b-415a-835f-20a2d777fec0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.552890 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6dd15bf-729b-415a-835f-20a2d777fec0-kube-api-access-dxrrm" (OuterVolumeSpecName: "kube-api-access-dxrrm") pod "b6dd15bf-729b-415a-835f-20a2d777fec0" (UID: "b6dd15bf-729b-415a-835f-20a2d777fec0"). InnerVolumeSpecName "kube-api-access-dxrrm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.637092 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxrrm\" (UniqueName: \"kubernetes.io/projected/b6dd15bf-729b-415a-835f-20a2d777fec0-kube-api-access-dxrrm\") on node \"crc\" DevicePath \"\""
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.637141 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dd15bf-729b-415a-835f-20a2d777fec0-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.685533 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6dd15bf-729b-415a-835f-20a2d777fec0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6dd15bf-729b-415a-835f-20a2d777fec0" (UID: "b6dd15bf-729b-415a-835f-20a2d777fec0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.740547 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dd15bf-729b-415a-835f-20a2d777fec0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.888374 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdztk" event={"ID":"b6dd15bf-729b-415a-835f-20a2d777fec0","Type":"ContainerDied","Data":"03ec6cb9b7fadd34d1fe75156c0aa5d424f939eecdfd48ef6b48f534c40ec2e0"}
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.888453 4874 scope.go:117] "RemoveContainer" containerID="d95550138a7512b811fbb08caadebf233ff8761506de6ed159f6c97c657a2cb4"
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.888495 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdztk"
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.913696 4874 scope.go:117] "RemoveContainer" containerID="22314f87718ae2d216abf1d85d2d6624b190fa8642a36d93ae61ae2b86956b68"
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.935770 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdztk"]
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.947265 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cdztk"]
Jan 22 12:26:21 crc kubenswrapper[4874]: I0122 12:26:21.951697 4874 scope.go:117] "RemoveContainer" containerID="3256cbd3318e852d8cc4290f2bb70ae12b509bf1d1903df37059cd9dca3f3ac5"
Jan 22 12:26:22 crc kubenswrapper[4874]: I0122 12:26:22.726078 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6dd15bf-729b-415a-835f-20a2d777fec0" path="/var/lib/kubelet/pods/b6dd15bf-729b-415a-835f-20a2d777fec0/volumes"
Jan 22 12:26:43 crc kubenswrapper[4874]: I0122 12:26:43.520016 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 12:26:43 crc kubenswrapper[4874]: I0122 12:26:43.520531 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 12:26:43 crc kubenswrapper[4874]: I0122 12:26:43.520581 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg"
Jan 22 12:26:43 crc kubenswrapper[4874]: I0122 12:26:43.521060 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63365a511b966484f39bb84f505e4f8a844bd95a40ffac8357b30eaa5bacd904"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 12:26:43 crc kubenswrapper[4874]: I0122 12:26:43.521113 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://63365a511b966484f39bb84f505e4f8a844bd95a40ffac8357b30eaa5bacd904" gracePeriod=600
Jan 22 12:26:44 crc kubenswrapper[4874]: I0122 12:26:44.047452 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="63365a511b966484f39bb84f505e4f8a844bd95a40ffac8357b30eaa5bacd904" exitCode=0
Jan 22 12:26:44 crc kubenswrapper[4874]: I0122 12:26:44.047605 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"63365a511b966484f39bb84f505e4f8a844bd95a40ffac8357b30eaa5bacd904"}
Jan 22 12:26:44 crc kubenswrapper[4874]: I0122 12:26:44.047831 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7"}
Jan 22 12:26:44 crc kubenswrapper[4874]: I0122 12:26:44.047851 4874 scope.go:117] "RemoveContainer" containerID="b29ca28e4593590c27126c3de6633dd0a7e93ba84ad80eaa6d8fb29df1080646"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.073758 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t99tf"]
Jan 22 12:27:03 crc kubenswrapper[4874]: E0122 12:27:03.076374 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6dd15bf-729b-415a-835f-20a2d777fec0" containerName="extract-content"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.076424 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6dd15bf-729b-415a-835f-20a2d777fec0" containerName="extract-content"
Jan 22 12:27:03 crc kubenswrapper[4874]: E0122 12:27:03.076472 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6dd15bf-729b-415a-835f-20a2d777fec0" containerName="extract-utilities"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.076485 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6dd15bf-729b-415a-835f-20a2d777fec0" containerName="extract-utilities"
Jan 22 12:27:03 crc kubenswrapper[4874]: E0122 12:27:03.076508 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6dd15bf-729b-415a-835f-20a2d777fec0" containerName="registry-server"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.076520 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6dd15bf-729b-415a-835f-20a2d777fec0" containerName="registry-server"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.076718 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6dd15bf-729b-415a-835f-20a2d777fec0" containerName="registry-server"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.078269 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.086416 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t99tf"]
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.246291 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xt6\" (UniqueName: \"kubernetes.io/projected/fda9be7d-c4d8-474a-956a-20de22b93a47-kube-api-access-q6xt6\") pod \"community-operators-t99tf\" (UID: \"fda9be7d-c4d8-474a-956a-20de22b93a47\") " pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.246383 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda9be7d-c4d8-474a-956a-20de22b93a47-utilities\") pod \"community-operators-t99tf\" (UID: \"fda9be7d-c4d8-474a-956a-20de22b93a47\") " pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.246453 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda9be7d-c4d8-474a-956a-20de22b93a47-catalog-content\") pod \"community-operators-t99tf\" (UID: \"fda9be7d-c4d8-474a-956a-20de22b93a47\") " pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.347827 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda9be7d-c4d8-474a-956a-20de22b93a47-utilities\") pod \"community-operators-t99tf\" (UID: \"fda9be7d-c4d8-474a-956a-20de22b93a47\") " pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.347897 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda9be7d-c4d8-474a-956a-20de22b93a47-catalog-content\") pod \"community-operators-t99tf\" (UID: \"fda9be7d-c4d8-474a-956a-20de22b93a47\") " pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.347985 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6xt6\" (UniqueName: \"kubernetes.io/projected/fda9be7d-c4d8-474a-956a-20de22b93a47-kube-api-access-q6xt6\") pod \"community-operators-t99tf\" (UID: \"fda9be7d-c4d8-474a-956a-20de22b93a47\") " pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.348494 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda9be7d-c4d8-474a-956a-20de22b93a47-utilities\") pod \"community-operators-t99tf\" (UID: \"fda9be7d-c4d8-474a-956a-20de22b93a47\") " pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.348551 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda9be7d-c4d8-474a-956a-20de22b93a47-catalog-content\") pod \"community-operators-t99tf\" (UID: \"fda9be7d-c4d8-474a-956a-20de22b93a47\") " pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.367329 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6xt6\" (UniqueName: \"kubernetes.io/projected/fda9be7d-c4d8-474a-956a-20de22b93a47-kube-api-access-q6xt6\") pod \"community-operators-t99tf\" (UID: \"fda9be7d-c4d8-474a-956a-20de22b93a47\") " pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.453780 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:03 crc kubenswrapper[4874]: I0122 12:27:03.968886 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t99tf"]
Jan 22 12:27:04 crc kubenswrapper[4874]: I0122 12:27:04.221055 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t99tf" event={"ID":"fda9be7d-c4d8-474a-956a-20de22b93a47","Type":"ContainerDied","Data":"db3c128e689667f735cf9e0a6ed9f2088a04522ca182fa45cdfc975c925fa480"}
Jan 22 12:27:04 crc kubenswrapper[4874]: I0122 12:27:04.220814 4874 generic.go:334] "Generic (PLEG): container finished" podID="fda9be7d-c4d8-474a-956a-20de22b93a47" containerID="db3c128e689667f735cf9e0a6ed9f2088a04522ca182fa45cdfc975c925fa480" exitCode=0
Jan 22 12:27:04 crc kubenswrapper[4874]: I0122 12:27:04.222238 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t99tf" event={"ID":"fda9be7d-c4d8-474a-956a-20de22b93a47","Type":"ContainerStarted","Data":"607e726b85edcce4faa88a5cb68ab4a274cf859ac65ee4cb189008767f474219"}
Jan 22 12:27:05 crc kubenswrapper[4874]: I0122 12:27:05.234798 4874 generic.go:334] "Generic (PLEG): container finished" podID="fda9be7d-c4d8-474a-956a-20de22b93a47" containerID="bce84a4d49a705a423820b0b8e7f8f9955c5910e4f4352f68f63e8fa8aaf2782" exitCode=0
Jan 22 12:27:05 crc kubenswrapper[4874]: I0122 12:27:05.234944 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t99tf" event={"ID":"fda9be7d-c4d8-474a-956a-20de22b93a47","Type":"ContainerDied","Data":"bce84a4d49a705a423820b0b8e7f8f9955c5910e4f4352f68f63e8fa8aaf2782"}
Jan 22 12:27:06 crc kubenswrapper[4874]: I0122 12:27:06.254071 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t99tf" event={"ID":"fda9be7d-c4d8-474a-956a-20de22b93a47","Type":"ContainerStarted","Data":"78cf9858463bcb9cda28f3ce4442e4717821d4e2b6fe395ff42951574324ae72"}
Jan 22 12:27:13 crc kubenswrapper[4874]: I0122 12:27:13.454730 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:13 crc kubenswrapper[4874]: I0122 12:27:13.456785 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:13 crc kubenswrapper[4874]: I0122 12:27:13.529610 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t99tf"
Jan 22 12:27:13 crc kubenswrapper[4874]: I0122 12:27:13.559588 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t99tf" podStartSLOduration=9.098910049 podStartE2EDuration="10.559572866s" podCreationTimestamp="2026-01-22 12:27:03 +0000 UTC" firstStartedPulling="2026-01-22 12:27:04.222857734 +0000 UTC m=+2798.067928844" lastFinishedPulling="2026-01-22 12:27:05.683520551 +0000 UTC m=+2799.528591661" observedRunningTime="2026-01-22 12:27:06.290921733 +0000 UTC m=+2800.135992813" watchObservedRunningTime="2026-01-22 12:27:13.559572866 +0000 UTC m=+2807.404643936"
Jan 22 12:27:14 crc kubenswrapper[4874]: I0122
12:27:14.364974 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t99tf" Jan 22 12:27:14 crc kubenswrapper[4874]: I0122 12:27:14.415102 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t99tf"] Jan 22 12:27:16 crc kubenswrapper[4874]: I0122 12:27:16.342554 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t99tf" podUID="fda9be7d-c4d8-474a-956a-20de22b93a47" containerName="registry-server" containerID="cri-o://78cf9858463bcb9cda28f3ce4442e4717821d4e2b6fe395ff42951574324ae72" gracePeriod=2 Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.281604 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t99tf" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.350804 4874 generic.go:334] "Generic (PLEG): container finished" podID="fda9be7d-c4d8-474a-956a-20de22b93a47" containerID="78cf9858463bcb9cda28f3ce4442e4717821d4e2b6fe395ff42951574324ae72" exitCode=0 Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.350850 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t99tf" event={"ID":"fda9be7d-c4d8-474a-956a-20de22b93a47","Type":"ContainerDied","Data":"78cf9858463bcb9cda28f3ce4442e4717821d4e2b6fe395ff42951574324ae72"} Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.350866 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t99tf" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.350878 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t99tf" event={"ID":"fda9be7d-c4d8-474a-956a-20de22b93a47","Type":"ContainerDied","Data":"607e726b85edcce4faa88a5cb68ab4a274cf859ac65ee4cb189008767f474219"} Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.350896 4874 scope.go:117] "RemoveContainer" containerID="78cf9858463bcb9cda28f3ce4442e4717821d4e2b6fe395ff42951574324ae72" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.366993 4874 scope.go:117] "RemoveContainer" containerID="bce84a4d49a705a423820b0b8e7f8f9955c5910e4f4352f68f63e8fa8aaf2782" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.380226 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6xt6\" (UniqueName: \"kubernetes.io/projected/fda9be7d-c4d8-474a-956a-20de22b93a47-kube-api-access-q6xt6\") pod \"fda9be7d-c4d8-474a-956a-20de22b93a47\" (UID: \"fda9be7d-c4d8-474a-956a-20de22b93a47\") " Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.380279 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda9be7d-c4d8-474a-956a-20de22b93a47-catalog-content\") pod \"fda9be7d-c4d8-474a-956a-20de22b93a47\" (UID: \"fda9be7d-c4d8-474a-956a-20de22b93a47\") " Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.380321 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda9be7d-c4d8-474a-956a-20de22b93a47-utilities\") pod \"fda9be7d-c4d8-474a-956a-20de22b93a47\" (UID: \"fda9be7d-c4d8-474a-956a-20de22b93a47\") " Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.381679 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fda9be7d-c4d8-474a-956a-20de22b93a47-utilities" (OuterVolumeSpecName: "utilities") pod "fda9be7d-c4d8-474a-956a-20de22b93a47" (UID: "fda9be7d-c4d8-474a-956a-20de22b93a47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.385772 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda9be7d-c4d8-474a-956a-20de22b93a47-kube-api-access-q6xt6" (OuterVolumeSpecName: "kube-api-access-q6xt6") pod "fda9be7d-c4d8-474a-956a-20de22b93a47" (UID: "fda9be7d-c4d8-474a-956a-20de22b93a47"). InnerVolumeSpecName "kube-api-access-q6xt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.391985 4874 scope.go:117] "RemoveContainer" containerID="db3c128e689667f735cf9e0a6ed9f2088a04522ca182fa45cdfc975c925fa480" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.431646 4874 scope.go:117] "RemoveContainer" containerID="78cf9858463bcb9cda28f3ce4442e4717821d4e2b6fe395ff42951574324ae72" Jan 22 12:27:17 crc kubenswrapper[4874]: E0122 12:27:17.432080 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78cf9858463bcb9cda28f3ce4442e4717821d4e2b6fe395ff42951574324ae72\": container with ID starting with 78cf9858463bcb9cda28f3ce4442e4717821d4e2b6fe395ff42951574324ae72 not found: ID does not exist" containerID="78cf9858463bcb9cda28f3ce4442e4717821d4e2b6fe395ff42951574324ae72" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.432118 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78cf9858463bcb9cda28f3ce4442e4717821d4e2b6fe395ff42951574324ae72"} err="failed to get container status \"78cf9858463bcb9cda28f3ce4442e4717821d4e2b6fe395ff42951574324ae72\": rpc error: code = NotFound desc = could not find container 
\"78cf9858463bcb9cda28f3ce4442e4717821d4e2b6fe395ff42951574324ae72\": container with ID starting with 78cf9858463bcb9cda28f3ce4442e4717821d4e2b6fe395ff42951574324ae72 not found: ID does not exist" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.432142 4874 scope.go:117] "RemoveContainer" containerID="bce84a4d49a705a423820b0b8e7f8f9955c5910e4f4352f68f63e8fa8aaf2782" Jan 22 12:27:17 crc kubenswrapper[4874]: E0122 12:27:17.432653 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bce84a4d49a705a423820b0b8e7f8f9955c5910e4f4352f68f63e8fa8aaf2782\": container with ID starting with bce84a4d49a705a423820b0b8e7f8f9955c5910e4f4352f68f63e8fa8aaf2782 not found: ID does not exist" containerID="bce84a4d49a705a423820b0b8e7f8f9955c5910e4f4352f68f63e8fa8aaf2782" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.432676 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bce84a4d49a705a423820b0b8e7f8f9955c5910e4f4352f68f63e8fa8aaf2782"} err="failed to get container status \"bce84a4d49a705a423820b0b8e7f8f9955c5910e4f4352f68f63e8fa8aaf2782\": rpc error: code = NotFound desc = could not find container \"bce84a4d49a705a423820b0b8e7f8f9955c5910e4f4352f68f63e8fa8aaf2782\": container with ID starting with bce84a4d49a705a423820b0b8e7f8f9955c5910e4f4352f68f63e8fa8aaf2782 not found: ID does not exist" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.432693 4874 scope.go:117] "RemoveContainer" containerID="db3c128e689667f735cf9e0a6ed9f2088a04522ca182fa45cdfc975c925fa480" Jan 22 12:27:17 crc kubenswrapper[4874]: E0122 12:27:17.432988 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db3c128e689667f735cf9e0a6ed9f2088a04522ca182fa45cdfc975c925fa480\": container with ID starting with db3c128e689667f735cf9e0a6ed9f2088a04522ca182fa45cdfc975c925fa480 not found: ID does not exist" 
containerID="db3c128e689667f735cf9e0a6ed9f2088a04522ca182fa45cdfc975c925fa480" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.433009 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db3c128e689667f735cf9e0a6ed9f2088a04522ca182fa45cdfc975c925fa480"} err="failed to get container status \"db3c128e689667f735cf9e0a6ed9f2088a04522ca182fa45cdfc975c925fa480\": rpc error: code = NotFound desc = could not find container \"db3c128e689667f735cf9e0a6ed9f2088a04522ca182fa45cdfc975c925fa480\": container with ID starting with db3c128e689667f735cf9e0a6ed9f2088a04522ca182fa45cdfc975c925fa480 not found: ID does not exist" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.438335 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fda9be7d-c4d8-474a-956a-20de22b93a47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fda9be7d-c4d8-474a-956a-20de22b93a47" (UID: "fda9be7d-c4d8-474a-956a-20de22b93a47"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.481736 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6xt6\" (UniqueName: \"kubernetes.io/projected/fda9be7d-c4d8-474a-956a-20de22b93a47-kube-api-access-q6xt6\") on node \"crc\" DevicePath \"\"" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.481771 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda9be7d-c4d8-474a-956a-20de22b93a47-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.481784 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda9be7d-c4d8-474a-956a-20de22b93a47-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.700417 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t99tf"] Jan 22 12:27:17 crc kubenswrapper[4874]: I0122 12:27:17.708907 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t99tf"] Jan 22 12:27:18 crc kubenswrapper[4874]: I0122 12:27:18.743604 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda9be7d-c4d8-474a-956a-20de22b93a47" path="/var/lib/kubelet/pods/fda9be7d-c4d8-474a-956a-20de22b93a47/volumes" Jan 22 12:28:43 crc kubenswrapper[4874]: I0122 12:28:43.520764 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:28:43 crc kubenswrapper[4874]: I0122 12:28:43.521430 4874 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:29:13 crc kubenswrapper[4874]: I0122 12:29:13.520919 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:29:13 crc kubenswrapper[4874]: I0122 12:29:13.521595 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:29:43 crc kubenswrapper[4874]: I0122 12:29:43.520846 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:29:43 crc kubenswrapper[4874]: I0122 12:29:43.521498 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:29:43 crc kubenswrapper[4874]: I0122 12:29:43.521574 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 12:29:43 crc 
kubenswrapper[4874]: I0122 12:29:43.522587 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:29:43 crc kubenswrapper[4874]: I0122 12:29:43.522707 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" gracePeriod=600 Jan 22 12:29:43 crc kubenswrapper[4874]: E0122 12:29:43.669149 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:29:43 crc kubenswrapper[4874]: I0122 12:29:43.675026 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" exitCode=0 Jan 22 12:29:43 crc kubenswrapper[4874]: I0122 12:29:43.675100 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7"} Jan 22 12:29:43 crc kubenswrapper[4874]: I0122 12:29:43.675154 4874 scope.go:117] "RemoveContainer" 
containerID="63365a511b966484f39bb84f505e4f8a844bd95a40ffac8357b30eaa5bacd904" Jan 22 12:29:43 crc kubenswrapper[4874]: I0122 12:29:43.675980 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:29:43 crc kubenswrapper[4874]: E0122 12:29:43.677156 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:29:56 crc kubenswrapper[4874]: I0122 12:29:56.724273 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:29:56 crc kubenswrapper[4874]: E0122 12:29:56.725326 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.158722 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7"] Jan 22 12:30:00 crc kubenswrapper[4874]: E0122 12:30:00.159499 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda9be7d-c4d8-474a-956a-20de22b93a47" containerName="registry-server" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.159513 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda9be7d-c4d8-474a-956a-20de22b93a47" 
containerName="registry-server" Jan 22 12:30:00 crc kubenswrapper[4874]: E0122 12:30:00.159531 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda9be7d-c4d8-474a-956a-20de22b93a47" containerName="extract-utilities" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.159540 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda9be7d-c4d8-474a-956a-20de22b93a47" containerName="extract-utilities" Jan 22 12:30:00 crc kubenswrapper[4874]: E0122 12:30:00.159560 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda9be7d-c4d8-474a-956a-20de22b93a47" containerName="extract-content" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.159568 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda9be7d-c4d8-474a-956a-20de22b93a47" containerName="extract-content" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.159731 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda9be7d-c4d8-474a-956a-20de22b93a47" containerName="registry-server" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.160300 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.162747 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.167898 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.173101 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7"] Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.318341 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9a6256a-adf8-4648-bc08-825f3d89a607-config-volume\") pod \"collect-profiles-29484750-cfdm7\" (UID: \"b9a6256a-adf8-4648-bc08-825f3d89a607\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.318541 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w7z5\" (UniqueName: \"kubernetes.io/projected/b9a6256a-adf8-4648-bc08-825f3d89a607-kube-api-access-9w7z5\") pod \"collect-profiles-29484750-cfdm7\" (UID: \"b9a6256a-adf8-4648-bc08-825f3d89a607\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.318587 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9a6256a-adf8-4648-bc08-825f3d89a607-secret-volume\") pod \"collect-profiles-29484750-cfdm7\" (UID: \"b9a6256a-adf8-4648-bc08-825f3d89a607\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.419841 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w7z5\" (UniqueName: \"kubernetes.io/projected/b9a6256a-adf8-4648-bc08-825f3d89a607-kube-api-access-9w7z5\") pod \"collect-profiles-29484750-cfdm7\" (UID: \"b9a6256a-adf8-4648-bc08-825f3d89a607\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.419934 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9a6256a-adf8-4648-bc08-825f3d89a607-secret-volume\") pod \"collect-profiles-29484750-cfdm7\" (UID: \"b9a6256a-adf8-4648-bc08-825f3d89a607\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.420036 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9a6256a-adf8-4648-bc08-825f3d89a607-config-volume\") pod \"collect-profiles-29484750-cfdm7\" (UID: \"b9a6256a-adf8-4648-bc08-825f3d89a607\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.421987 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9a6256a-adf8-4648-bc08-825f3d89a607-config-volume\") pod \"collect-profiles-29484750-cfdm7\" (UID: \"b9a6256a-adf8-4648-bc08-825f3d89a607\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.430547 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b9a6256a-adf8-4648-bc08-825f3d89a607-secret-volume\") pod \"collect-profiles-29484750-cfdm7\" (UID: \"b9a6256a-adf8-4648-bc08-825f3d89a607\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.444559 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w7z5\" (UniqueName: \"kubernetes.io/projected/b9a6256a-adf8-4648-bc08-825f3d89a607-kube-api-access-9w7z5\") pod \"collect-profiles-29484750-cfdm7\" (UID: \"b9a6256a-adf8-4648-bc08-825f3d89a607\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.507733 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" Jan 22 12:30:00 crc kubenswrapper[4874]: I0122 12:30:00.745525 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7"] Jan 22 12:30:01 crc kubenswrapper[4874]: I0122 12:30:01.156241 4874 generic.go:334] "Generic (PLEG): container finished" podID="b9a6256a-adf8-4648-bc08-825f3d89a607" containerID="d1dd2178451280b215a757384dff18acd9ee41c4da8e5236e7ad4dc659c2536a" exitCode=0 Jan 22 12:30:01 crc kubenswrapper[4874]: I0122 12:30:01.156361 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" event={"ID":"b9a6256a-adf8-4648-bc08-825f3d89a607","Type":"ContainerDied","Data":"d1dd2178451280b215a757384dff18acd9ee41c4da8e5236e7ad4dc659c2536a"} Jan 22 12:30:01 crc kubenswrapper[4874]: I0122 12:30:01.156592 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" 
event={"ID":"b9a6256a-adf8-4648-bc08-825f3d89a607","Type":"ContainerStarted","Data":"0edd124508f1e34abd9dac5ed4a94b67e8e7f638376944f4ed579e96dae37a7e"} Jan 22 12:30:02 crc kubenswrapper[4874]: I0122 12:30:02.499731 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" Jan 22 12:30:02 crc kubenswrapper[4874]: I0122 12:30:02.559908 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w7z5\" (UniqueName: \"kubernetes.io/projected/b9a6256a-adf8-4648-bc08-825f3d89a607-kube-api-access-9w7z5\") pod \"b9a6256a-adf8-4648-bc08-825f3d89a607\" (UID: \"b9a6256a-adf8-4648-bc08-825f3d89a607\") " Jan 22 12:30:02 crc kubenswrapper[4874]: I0122 12:30:02.559987 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9a6256a-adf8-4648-bc08-825f3d89a607-secret-volume\") pod \"b9a6256a-adf8-4648-bc08-825f3d89a607\" (UID: \"b9a6256a-adf8-4648-bc08-825f3d89a607\") " Jan 22 12:30:02 crc kubenswrapper[4874]: I0122 12:30:02.560029 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9a6256a-adf8-4648-bc08-825f3d89a607-config-volume\") pod \"b9a6256a-adf8-4648-bc08-825f3d89a607\" (UID: \"b9a6256a-adf8-4648-bc08-825f3d89a607\") " Jan 22 12:30:02 crc kubenswrapper[4874]: I0122 12:30:02.560697 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9a6256a-adf8-4648-bc08-825f3d89a607-config-volume" (OuterVolumeSpecName: "config-volume") pod "b9a6256a-adf8-4648-bc08-825f3d89a607" (UID: "b9a6256a-adf8-4648-bc08-825f3d89a607"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:30:02 crc kubenswrapper[4874]: I0122 12:30:02.565569 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9a6256a-adf8-4648-bc08-825f3d89a607-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b9a6256a-adf8-4648-bc08-825f3d89a607" (UID: "b9a6256a-adf8-4648-bc08-825f3d89a607"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:30:02 crc kubenswrapper[4874]: I0122 12:30:02.565997 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9a6256a-adf8-4648-bc08-825f3d89a607-kube-api-access-9w7z5" (OuterVolumeSpecName: "kube-api-access-9w7z5") pod "b9a6256a-adf8-4648-bc08-825f3d89a607" (UID: "b9a6256a-adf8-4648-bc08-825f3d89a607"). InnerVolumeSpecName "kube-api-access-9w7z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:30:02 crc kubenswrapper[4874]: I0122 12:30:02.661622 4874 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9a6256a-adf8-4648-bc08-825f3d89a607-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:30:02 crc kubenswrapper[4874]: I0122 12:30:02.661667 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9a6256a-adf8-4648-bc08-825f3d89a607-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:30:02 crc kubenswrapper[4874]: I0122 12:30:02.661687 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w7z5\" (UniqueName: \"kubernetes.io/projected/b9a6256a-adf8-4648-bc08-825f3d89a607-kube-api-access-9w7z5\") on node \"crc\" DevicePath \"\"" Jan 22 12:30:03 crc kubenswrapper[4874]: I0122 12:30:03.176926 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" 
event={"ID":"b9a6256a-adf8-4648-bc08-825f3d89a607","Type":"ContainerDied","Data":"0edd124508f1e34abd9dac5ed4a94b67e8e7f638376944f4ed579e96dae37a7e"} Jan 22 12:30:03 crc kubenswrapper[4874]: I0122 12:30:03.177717 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0edd124508f1e34abd9dac5ed4a94b67e8e7f638376944f4ed579e96dae37a7e" Jan 22 12:30:03 crc kubenswrapper[4874]: I0122 12:30:03.177059 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-cfdm7" Jan 22 12:30:03 crc kubenswrapper[4874]: I0122 12:30:03.579154 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn"] Jan 22 12:30:03 crc kubenswrapper[4874]: I0122 12:30:03.590583 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-xlhwn"] Jan 22 12:30:04 crc kubenswrapper[4874]: I0122 12:30:04.725755 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce45c6ff-601e-4b12-97b6-4737304db2d7" path="/var/lib/kubelet/pods/ce45c6ff-601e-4b12-97b6-4737304db2d7/volumes" Jan 22 12:30:09 crc kubenswrapper[4874]: I0122 12:30:09.715836 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:30:09 crc kubenswrapper[4874]: E0122 12:30:09.716480 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:30:23 crc kubenswrapper[4874]: I0122 12:30:23.716177 4874 scope.go:117] "RemoveContainer" 
containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:30:23 crc kubenswrapper[4874]: E0122 12:30:23.717331 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:30:33 crc kubenswrapper[4874]: I0122 12:30:33.724969 4874 scope.go:117] "RemoveContainer" containerID="e4f9ca4f65548ceb4a0c2ab28cc0dcb70d2bfed6abf0e18366dac77436253610" Jan 22 12:30:38 crc kubenswrapper[4874]: I0122 12:30:38.717226 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:30:38 crc kubenswrapper[4874]: E0122 12:30:38.718073 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:30:51 crc kubenswrapper[4874]: I0122 12:30:51.716127 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:30:51 crc kubenswrapper[4874]: E0122 12:30:51.716751 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:31:04 crc kubenswrapper[4874]: I0122 12:31:04.715762 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:31:04 crc kubenswrapper[4874]: E0122 12:31:04.716686 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:31:17 crc kubenswrapper[4874]: I0122 12:31:17.716204 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:31:17 crc kubenswrapper[4874]: E0122 12:31:17.716992 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:31:31 crc kubenswrapper[4874]: I0122 12:31:31.716855 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:31:31 crc kubenswrapper[4874]: E0122 12:31:31.717824 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:31:42 crc kubenswrapper[4874]: I0122 12:31:42.715886 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:31:42 crc kubenswrapper[4874]: E0122 12:31:42.716701 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:31:53 crc kubenswrapper[4874]: I0122 12:31:53.718363 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:31:53 crc kubenswrapper[4874]: E0122 12:31:53.719228 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:32:06 crc kubenswrapper[4874]: I0122 12:32:06.731929 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:32:06 crc kubenswrapper[4874]: E0122 12:32:06.733439 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:32:17 crc kubenswrapper[4874]: I0122 12:32:17.717961 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:32:17 crc kubenswrapper[4874]: E0122 12:32:17.720152 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:32:32 crc kubenswrapper[4874]: I0122 12:32:32.718714 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:32:32 crc kubenswrapper[4874]: E0122 12:32:32.721049 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:32:45 crc kubenswrapper[4874]: I0122 12:32:45.717021 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:32:45 crc kubenswrapper[4874]: E0122 12:32:45.717861 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:32:57 crc kubenswrapper[4874]: I0122 12:32:57.715999 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:32:57 crc kubenswrapper[4874]: E0122 12:32:57.717072 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:33:08 crc kubenswrapper[4874]: I0122 12:33:08.723968 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:33:08 crc kubenswrapper[4874]: E0122 12:33:08.725163 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:33:19 crc kubenswrapper[4874]: I0122 12:33:19.716363 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:33:19 crc kubenswrapper[4874]: E0122 12:33:19.717325 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:33:31 crc kubenswrapper[4874]: I0122 12:33:31.715692 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:33:31 crc kubenswrapper[4874]: E0122 12:33:31.716636 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.020914 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dxtkp"] Jan 22 12:33:41 crc kubenswrapper[4874]: E0122 12:33:41.021790 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9a6256a-adf8-4648-bc08-825f3d89a607" containerName="collect-profiles" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.021808 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9a6256a-adf8-4648-bc08-825f3d89a607" containerName="collect-profiles" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.021993 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9a6256a-adf8-4648-bc08-825f3d89a607" containerName="collect-profiles" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.029452 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.046091 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dxtkp"] Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.136525 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4178041-14ad-4312-8200-0bc023576da8-utilities\") pod \"certified-operators-dxtkp\" (UID: \"d4178041-14ad-4312-8200-0bc023576da8\") " pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.136598 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4178041-14ad-4312-8200-0bc023576da8-catalog-content\") pod \"certified-operators-dxtkp\" (UID: \"d4178041-14ad-4312-8200-0bc023576da8\") " pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.136630 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv8s9\" (UniqueName: \"kubernetes.io/projected/d4178041-14ad-4312-8200-0bc023576da8-kube-api-access-rv8s9\") pod \"certified-operators-dxtkp\" (UID: \"d4178041-14ad-4312-8200-0bc023576da8\") " pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.238029 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4178041-14ad-4312-8200-0bc023576da8-utilities\") pod \"certified-operators-dxtkp\" (UID: \"d4178041-14ad-4312-8200-0bc023576da8\") " pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.238098 4874 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4178041-14ad-4312-8200-0bc023576da8-catalog-content\") pod \"certified-operators-dxtkp\" (UID: \"d4178041-14ad-4312-8200-0bc023576da8\") " pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.238123 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv8s9\" (UniqueName: \"kubernetes.io/projected/d4178041-14ad-4312-8200-0bc023576da8-kube-api-access-rv8s9\") pod \"certified-operators-dxtkp\" (UID: \"d4178041-14ad-4312-8200-0bc023576da8\") " pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.238672 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4178041-14ad-4312-8200-0bc023576da8-utilities\") pod \"certified-operators-dxtkp\" (UID: \"d4178041-14ad-4312-8200-0bc023576da8\") " pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.238749 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4178041-14ad-4312-8200-0bc023576da8-catalog-content\") pod \"certified-operators-dxtkp\" (UID: \"d4178041-14ad-4312-8200-0bc023576da8\") " pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.267133 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv8s9\" (UniqueName: \"kubernetes.io/projected/d4178041-14ad-4312-8200-0bc023576da8-kube-api-access-rv8s9\") pod \"certified-operators-dxtkp\" (UID: \"d4178041-14ad-4312-8200-0bc023576da8\") " pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.353314 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:41 crc kubenswrapper[4874]: I0122 12:33:41.627567 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dxtkp"] Jan 22 12:33:42 crc kubenswrapper[4874]: I0122 12:33:42.180250 4874 generic.go:334] "Generic (PLEG): container finished" podID="d4178041-14ad-4312-8200-0bc023576da8" containerID="03fa9881f901b676e4d5bfa7b35603a03b5daba1c553dfcb37bd784ddd5ae75a" exitCode=0 Jan 22 12:33:42 crc kubenswrapper[4874]: I0122 12:33:42.180355 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxtkp" event={"ID":"d4178041-14ad-4312-8200-0bc023576da8","Type":"ContainerDied","Data":"03fa9881f901b676e4d5bfa7b35603a03b5daba1c553dfcb37bd784ddd5ae75a"} Jan 22 12:33:42 crc kubenswrapper[4874]: I0122 12:33:42.180539 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxtkp" event={"ID":"d4178041-14ad-4312-8200-0bc023576da8","Type":"ContainerStarted","Data":"06511af0e67adbc96e0266bc1edddc83525d3b3fd4f5e3c4d71165cd927480e7"} Jan 22 12:33:42 crc kubenswrapper[4874]: I0122 12:33:42.182090 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:33:43 crc kubenswrapper[4874]: I0122 12:33:43.188454 4874 generic.go:334] "Generic (PLEG): container finished" podID="d4178041-14ad-4312-8200-0bc023576da8" containerID="dbdc0cc6b40c096de74bde2acc2ca457b7210736c0d92071699dd82d24092d21" exitCode=0 Jan 22 12:33:43 crc kubenswrapper[4874]: I0122 12:33:43.188501 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxtkp" event={"ID":"d4178041-14ad-4312-8200-0bc023576da8","Type":"ContainerDied","Data":"dbdc0cc6b40c096de74bde2acc2ca457b7210736c0d92071699dd82d24092d21"} Jan 22 12:33:44 crc kubenswrapper[4874]: I0122 12:33:44.197504 4874 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-dxtkp" event={"ID":"d4178041-14ad-4312-8200-0bc023576da8","Type":"ContainerStarted","Data":"13573dc8f3c42b06e2e33121c0eb1d276a657f1c2afdd0ab2441688bb3844eae"} Jan 22 12:33:44 crc kubenswrapper[4874]: I0122 12:33:44.218849 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dxtkp" podStartSLOduration=2.843595423 podStartE2EDuration="4.218833562s" podCreationTimestamp="2026-01-22 12:33:40 +0000 UTC" firstStartedPulling="2026-01-22 12:33:42.181896454 +0000 UTC m=+3196.026967524" lastFinishedPulling="2026-01-22 12:33:43.557134583 +0000 UTC m=+3197.402205663" observedRunningTime="2026-01-22 12:33:44.212496516 +0000 UTC m=+3198.057567586" watchObservedRunningTime="2026-01-22 12:33:44.218833562 +0000 UTC m=+3198.063904622" Jan 22 12:33:46 crc kubenswrapper[4874]: I0122 12:33:46.721196 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:33:46 crc kubenswrapper[4874]: E0122 12:33:46.722580 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:33:51 crc kubenswrapper[4874]: I0122 12:33:51.354460 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:51 crc kubenswrapper[4874]: I0122 12:33:51.354998 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:51 crc kubenswrapper[4874]: I0122 12:33:51.412630 4874 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:52 crc kubenswrapper[4874]: I0122 12:33:52.321058 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:52 crc kubenswrapper[4874]: I0122 12:33:52.383379 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dxtkp"] Jan 22 12:33:54 crc kubenswrapper[4874]: I0122 12:33:54.274159 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dxtkp" podUID="d4178041-14ad-4312-8200-0bc023576da8" containerName="registry-server" containerID="cri-o://13573dc8f3c42b06e2e33121c0eb1d276a657f1c2afdd0ab2441688bb3844eae" gracePeriod=2 Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.255505 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.301970 4874 generic.go:334] "Generic (PLEG): container finished" podID="d4178041-14ad-4312-8200-0bc023576da8" containerID="13573dc8f3c42b06e2e33121c0eb1d276a657f1c2afdd0ab2441688bb3844eae" exitCode=0 Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.302151 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxtkp" event={"ID":"d4178041-14ad-4312-8200-0bc023576da8","Type":"ContainerDied","Data":"13573dc8f3c42b06e2e33121c0eb1d276a657f1c2afdd0ab2441688bb3844eae"} Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.302708 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxtkp" event={"ID":"d4178041-14ad-4312-8200-0bc023576da8","Type":"ContainerDied","Data":"06511af0e67adbc96e0266bc1edddc83525d3b3fd4f5e3c4d71165cd927480e7"} Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 
12:33:55.302265 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxtkp" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.302805 4874 scope.go:117] "RemoveContainer" containerID="13573dc8f3c42b06e2e33121c0eb1d276a657f1c2afdd0ab2441688bb3844eae" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.324047 4874 scope.go:117] "RemoveContainer" containerID="dbdc0cc6b40c096de74bde2acc2ca457b7210736c0d92071699dd82d24092d21" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.346971 4874 scope.go:117] "RemoveContainer" containerID="03fa9881f901b676e4d5bfa7b35603a03b5daba1c553dfcb37bd784ddd5ae75a" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.354627 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv8s9\" (UniqueName: \"kubernetes.io/projected/d4178041-14ad-4312-8200-0bc023576da8-kube-api-access-rv8s9\") pod \"d4178041-14ad-4312-8200-0bc023576da8\" (UID: \"d4178041-14ad-4312-8200-0bc023576da8\") " Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.354748 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4178041-14ad-4312-8200-0bc023576da8-catalog-content\") pod \"d4178041-14ad-4312-8200-0bc023576da8\" (UID: \"d4178041-14ad-4312-8200-0bc023576da8\") " Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.354825 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4178041-14ad-4312-8200-0bc023576da8-utilities\") pod \"d4178041-14ad-4312-8200-0bc023576da8\" (UID: \"d4178041-14ad-4312-8200-0bc023576da8\") " Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.357690 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4178041-14ad-4312-8200-0bc023576da8-utilities" (OuterVolumeSpecName: 
"utilities") pod "d4178041-14ad-4312-8200-0bc023576da8" (UID: "d4178041-14ad-4312-8200-0bc023576da8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.362644 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4178041-14ad-4312-8200-0bc023576da8-kube-api-access-rv8s9" (OuterVolumeSpecName: "kube-api-access-rv8s9") pod "d4178041-14ad-4312-8200-0bc023576da8" (UID: "d4178041-14ad-4312-8200-0bc023576da8"). InnerVolumeSpecName "kube-api-access-rv8s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.369387 4874 scope.go:117] "RemoveContainer" containerID="13573dc8f3c42b06e2e33121c0eb1d276a657f1c2afdd0ab2441688bb3844eae" Jan 22 12:33:55 crc kubenswrapper[4874]: E0122 12:33:55.370140 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13573dc8f3c42b06e2e33121c0eb1d276a657f1c2afdd0ab2441688bb3844eae\": container with ID starting with 13573dc8f3c42b06e2e33121c0eb1d276a657f1c2afdd0ab2441688bb3844eae not found: ID does not exist" containerID="13573dc8f3c42b06e2e33121c0eb1d276a657f1c2afdd0ab2441688bb3844eae" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.370319 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13573dc8f3c42b06e2e33121c0eb1d276a657f1c2afdd0ab2441688bb3844eae"} err="failed to get container status \"13573dc8f3c42b06e2e33121c0eb1d276a657f1c2afdd0ab2441688bb3844eae\": rpc error: code = NotFound desc = could not find container \"13573dc8f3c42b06e2e33121c0eb1d276a657f1c2afdd0ab2441688bb3844eae\": container with ID starting with 13573dc8f3c42b06e2e33121c0eb1d276a657f1c2afdd0ab2441688bb3844eae not found: ID does not exist" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.370562 4874 scope.go:117] "RemoveContainer" 
containerID="dbdc0cc6b40c096de74bde2acc2ca457b7210736c0d92071699dd82d24092d21" Jan 22 12:33:55 crc kubenswrapper[4874]: E0122 12:33:55.371110 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbdc0cc6b40c096de74bde2acc2ca457b7210736c0d92071699dd82d24092d21\": container with ID starting with dbdc0cc6b40c096de74bde2acc2ca457b7210736c0d92071699dd82d24092d21 not found: ID does not exist" containerID="dbdc0cc6b40c096de74bde2acc2ca457b7210736c0d92071699dd82d24092d21" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.371335 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbdc0cc6b40c096de74bde2acc2ca457b7210736c0d92071699dd82d24092d21"} err="failed to get container status \"dbdc0cc6b40c096de74bde2acc2ca457b7210736c0d92071699dd82d24092d21\": rpc error: code = NotFound desc = could not find container \"dbdc0cc6b40c096de74bde2acc2ca457b7210736c0d92071699dd82d24092d21\": container with ID starting with dbdc0cc6b40c096de74bde2acc2ca457b7210736c0d92071699dd82d24092d21 not found: ID does not exist" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.371530 4874 scope.go:117] "RemoveContainer" containerID="03fa9881f901b676e4d5bfa7b35603a03b5daba1c553dfcb37bd784ddd5ae75a" Jan 22 12:33:55 crc kubenswrapper[4874]: E0122 12:33:55.372095 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03fa9881f901b676e4d5bfa7b35603a03b5daba1c553dfcb37bd784ddd5ae75a\": container with ID starting with 03fa9881f901b676e4d5bfa7b35603a03b5daba1c553dfcb37bd784ddd5ae75a not found: ID does not exist" containerID="03fa9881f901b676e4d5bfa7b35603a03b5daba1c553dfcb37bd784ddd5ae75a" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.372284 4874 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"03fa9881f901b676e4d5bfa7b35603a03b5daba1c553dfcb37bd784ddd5ae75a"} err="failed to get container status \"03fa9881f901b676e4d5bfa7b35603a03b5daba1c553dfcb37bd784ddd5ae75a\": rpc error: code = NotFound desc = could not find container \"03fa9881f901b676e4d5bfa7b35603a03b5daba1c553dfcb37bd784ddd5ae75a\": container with ID starting with 03fa9881f901b676e4d5bfa7b35603a03b5daba1c553dfcb37bd784ddd5ae75a not found: ID does not exist" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.408454 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4178041-14ad-4312-8200-0bc023576da8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4178041-14ad-4312-8200-0bc023576da8" (UID: "d4178041-14ad-4312-8200-0bc023576da8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.456118 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv8s9\" (UniqueName: \"kubernetes.io/projected/d4178041-14ad-4312-8200-0bc023576da8-kube-api-access-rv8s9\") on node \"crc\" DevicePath \"\"" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.456149 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4178041-14ad-4312-8200-0bc023576da8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.456159 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4178041-14ad-4312-8200-0bc023576da8-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.646902 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dxtkp"] Jan 22 12:33:55 crc kubenswrapper[4874]: I0122 12:33:55.660348 4874 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/certified-operators-dxtkp"] Jan 22 12:33:56 crc kubenswrapper[4874]: I0122 12:33:56.730355 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4178041-14ad-4312-8200-0bc023576da8" path="/var/lib/kubelet/pods/d4178041-14ad-4312-8200-0bc023576da8/volumes" Jan 22 12:33:57 crc kubenswrapper[4874]: I0122 12:33:57.717774 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:33:57 crc kubenswrapper[4874]: E0122 12:33:57.718183 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:34:11 crc kubenswrapper[4874]: I0122 12:34:11.716601 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:34:11 crc kubenswrapper[4874]: E0122 12:34:11.719881 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:34:24 crc kubenswrapper[4874]: I0122 12:34:24.716676 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:34:24 crc kubenswrapper[4874]: E0122 12:34:24.717538 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:34:38 crc kubenswrapper[4874]: I0122 12:34:38.716123 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:34:38 crc kubenswrapper[4874]: E0122 12:34:38.716907 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:34:49 crc kubenswrapper[4874]: I0122 12:34:49.716390 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:34:50 crc kubenswrapper[4874]: I0122 12:34:50.861989 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"07181174f3dd294d8a33813c347d0b5e89eda4823fa5474d3fdedb3ef1a9e13a"} Jan 22 12:37:13 crc kubenswrapper[4874]: I0122 12:37:13.521010 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:37:13 crc kubenswrapper[4874]: I0122 12:37:13.521602 4874 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:37:37 crc kubenswrapper[4874]: I0122 12:37:37.998536 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bdp5n"] Jan 22 12:37:38 crc kubenswrapper[4874]: E0122 12:37:37.999351 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4178041-14ad-4312-8200-0bc023576da8" containerName="extract-content" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:37.999367 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4178041-14ad-4312-8200-0bc023576da8" containerName="extract-content" Jan 22 12:37:38 crc kubenswrapper[4874]: E0122 12:37:37.999412 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4178041-14ad-4312-8200-0bc023576da8" containerName="extract-utilities" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:37.999423 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4178041-14ad-4312-8200-0bc023576da8" containerName="extract-utilities" Jan 22 12:37:38 crc kubenswrapper[4874]: E0122 12:37:37.999458 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4178041-14ad-4312-8200-0bc023576da8" containerName="registry-server" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:37.999465 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4178041-14ad-4312-8200-0bc023576da8" containerName="registry-server" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:37.999628 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4178041-14ad-4312-8200-0bc023576da8" containerName="registry-server" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:38.000697 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:38.017903 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bdp5n"] Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:38.148794 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5p5c\" (UniqueName: \"kubernetes.io/projected/1fb7aed4-6e72-4661-bc08-fad9bf646202-kube-api-access-p5p5c\") pod \"community-operators-bdp5n\" (UID: \"1fb7aed4-6e72-4661-bc08-fad9bf646202\") " pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:38.148972 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb7aed4-6e72-4661-bc08-fad9bf646202-utilities\") pod \"community-operators-bdp5n\" (UID: \"1fb7aed4-6e72-4661-bc08-fad9bf646202\") " pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:38.149000 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb7aed4-6e72-4661-bc08-fad9bf646202-catalog-content\") pod \"community-operators-bdp5n\" (UID: \"1fb7aed4-6e72-4661-bc08-fad9bf646202\") " pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:38.250806 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5p5c\" (UniqueName: \"kubernetes.io/projected/1fb7aed4-6e72-4661-bc08-fad9bf646202-kube-api-access-p5p5c\") pod \"community-operators-bdp5n\" (UID: \"1fb7aed4-6e72-4661-bc08-fad9bf646202\") " pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:38.251232 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb7aed4-6e72-4661-bc08-fad9bf646202-utilities\") pod \"community-operators-bdp5n\" (UID: \"1fb7aed4-6e72-4661-bc08-fad9bf646202\") " pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:38.251256 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb7aed4-6e72-4661-bc08-fad9bf646202-catalog-content\") pod \"community-operators-bdp5n\" (UID: \"1fb7aed4-6e72-4661-bc08-fad9bf646202\") " pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:38.251684 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb7aed4-6e72-4661-bc08-fad9bf646202-utilities\") pod \"community-operators-bdp5n\" (UID: \"1fb7aed4-6e72-4661-bc08-fad9bf646202\") " pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:38.251775 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb7aed4-6e72-4661-bc08-fad9bf646202-catalog-content\") pod \"community-operators-bdp5n\" (UID: \"1fb7aed4-6e72-4661-bc08-fad9bf646202\") " pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:38.269979 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5p5c\" (UniqueName: \"kubernetes.io/projected/1fb7aed4-6e72-4661-bc08-fad9bf646202-kube-api-access-p5p5c\") pod \"community-operators-bdp5n\" (UID: \"1fb7aed4-6e72-4661-bc08-fad9bf646202\") " pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:38.318143 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:38 crc kubenswrapper[4874]: I0122 12:37:38.607530 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bdp5n"] Jan 22 12:37:39 crc kubenswrapper[4874]: I0122 12:37:39.398344 4874 generic.go:334] "Generic (PLEG): container finished" podID="1fb7aed4-6e72-4661-bc08-fad9bf646202" containerID="a3fb8abd9747c4bb400b1d4533e2a33a5f1808df7be49594ff6e9e5db4c9aa85" exitCode=0 Jan 22 12:37:39 crc kubenswrapper[4874]: I0122 12:37:39.398421 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdp5n" event={"ID":"1fb7aed4-6e72-4661-bc08-fad9bf646202","Type":"ContainerDied","Data":"a3fb8abd9747c4bb400b1d4533e2a33a5f1808df7be49594ff6e9e5db4c9aa85"} Jan 22 12:37:39 crc kubenswrapper[4874]: I0122 12:37:39.398633 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdp5n" event={"ID":"1fb7aed4-6e72-4661-bc08-fad9bf646202","Type":"ContainerStarted","Data":"d9a63872957764bb9bbe2995e7ec36fa28bf1580fa162fc26a40162b39a4f27e"} Jan 22 12:37:41 crc kubenswrapper[4874]: I0122 12:37:41.419568 4874 generic.go:334] "Generic (PLEG): container finished" podID="1fb7aed4-6e72-4661-bc08-fad9bf646202" containerID="0983b99ba4d62c33ac972b4d48dce835b2a92adc61a70384ce59116f8ff747e8" exitCode=0 Jan 22 12:37:41 crc kubenswrapper[4874]: I0122 12:37:41.419984 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdp5n" event={"ID":"1fb7aed4-6e72-4661-bc08-fad9bf646202","Type":"ContainerDied","Data":"0983b99ba4d62c33ac972b4d48dce835b2a92adc61a70384ce59116f8ff747e8"} Jan 22 12:37:42 crc kubenswrapper[4874]: I0122 12:37:42.430601 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdp5n" 
event={"ID":"1fb7aed4-6e72-4661-bc08-fad9bf646202","Type":"ContainerStarted","Data":"e440d1b5a98a70fb0140d96828eeff03c8761c2df34a5714a0f47a188c26f27e"} Jan 22 12:37:42 crc kubenswrapper[4874]: I0122 12:37:42.461048 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bdp5n" podStartSLOduration=2.92455063 podStartE2EDuration="5.461027197s" podCreationTimestamp="2026-01-22 12:37:37 +0000 UTC" firstStartedPulling="2026-01-22 12:37:39.400428431 +0000 UTC m=+3433.245499501" lastFinishedPulling="2026-01-22 12:37:41.936904978 +0000 UTC m=+3435.781976068" observedRunningTime="2026-01-22 12:37:42.458699216 +0000 UTC m=+3436.303770296" watchObservedRunningTime="2026-01-22 12:37:42.461027197 +0000 UTC m=+3436.306098267" Jan 22 12:37:43 crc kubenswrapper[4874]: I0122 12:37:43.520449 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:37:43 crc kubenswrapper[4874]: I0122 12:37:43.520723 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:37:48 crc kubenswrapper[4874]: I0122 12:37:48.318686 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:48 crc kubenswrapper[4874]: I0122 12:37:48.319885 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:48 crc kubenswrapper[4874]: I0122 12:37:48.379541 4874 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:48 crc kubenswrapper[4874]: I0122 12:37:48.517066 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:48 crc kubenswrapper[4874]: I0122 12:37:48.617867 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bdp5n"] Jan 22 12:37:50 crc kubenswrapper[4874]: I0122 12:37:50.486538 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bdp5n" podUID="1fb7aed4-6e72-4661-bc08-fad9bf646202" containerName="registry-server" containerID="cri-o://e440d1b5a98a70fb0140d96828eeff03c8761c2df34a5714a0f47a188c26f27e" gracePeriod=2 Jan 22 12:37:51 crc kubenswrapper[4874]: I0122 12:37:51.942068 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.065884 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5p5c\" (UniqueName: \"kubernetes.io/projected/1fb7aed4-6e72-4661-bc08-fad9bf646202-kube-api-access-p5p5c\") pod \"1fb7aed4-6e72-4661-bc08-fad9bf646202\" (UID: \"1fb7aed4-6e72-4661-bc08-fad9bf646202\") " Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.066053 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb7aed4-6e72-4661-bc08-fad9bf646202-catalog-content\") pod \"1fb7aed4-6e72-4661-bc08-fad9bf646202\" (UID: \"1fb7aed4-6e72-4661-bc08-fad9bf646202\") " Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.066119 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1fb7aed4-6e72-4661-bc08-fad9bf646202-utilities\") pod \"1fb7aed4-6e72-4661-bc08-fad9bf646202\" (UID: \"1fb7aed4-6e72-4661-bc08-fad9bf646202\") " Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.067193 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fb7aed4-6e72-4661-bc08-fad9bf646202-utilities" (OuterVolumeSpecName: "utilities") pod "1fb7aed4-6e72-4661-bc08-fad9bf646202" (UID: "1fb7aed4-6e72-4661-bc08-fad9bf646202"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.070945 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb7aed4-6e72-4661-bc08-fad9bf646202-kube-api-access-p5p5c" (OuterVolumeSpecName: "kube-api-access-p5p5c") pod "1fb7aed4-6e72-4661-bc08-fad9bf646202" (UID: "1fb7aed4-6e72-4661-bc08-fad9bf646202"). InnerVolumeSpecName "kube-api-access-p5p5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.121559 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fb7aed4-6e72-4661-bc08-fad9bf646202-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fb7aed4-6e72-4661-bc08-fad9bf646202" (UID: "1fb7aed4-6e72-4661-bc08-fad9bf646202"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.168639 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5p5c\" (UniqueName: \"kubernetes.io/projected/1fb7aed4-6e72-4661-bc08-fad9bf646202-kube-api-access-p5p5c\") on node \"crc\" DevicePath \"\"" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.168782 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb7aed4-6e72-4661-bc08-fad9bf646202-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.168813 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb7aed4-6e72-4661-bc08-fad9bf646202-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.510196 4874 generic.go:334] "Generic (PLEG): container finished" podID="1fb7aed4-6e72-4661-bc08-fad9bf646202" containerID="e440d1b5a98a70fb0140d96828eeff03c8761c2df34a5714a0f47a188c26f27e" exitCode=0 Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.510243 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdp5n" event={"ID":"1fb7aed4-6e72-4661-bc08-fad9bf646202","Type":"ContainerDied","Data":"e440d1b5a98a70fb0140d96828eeff03c8761c2df34a5714a0f47a188c26f27e"} Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.510280 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdp5n" event={"ID":"1fb7aed4-6e72-4661-bc08-fad9bf646202","Type":"ContainerDied","Data":"d9a63872957764bb9bbe2995e7ec36fa28bf1580fa162fc26a40162b39a4f27e"} Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.510302 4874 scope.go:117] "RemoveContainer" containerID="e440d1b5a98a70fb0140d96828eeff03c8761c2df34a5714a0f47a188c26f27e" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 
12:37:52.510319 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdp5n" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.530078 4874 scope.go:117] "RemoveContainer" containerID="0983b99ba4d62c33ac972b4d48dce835b2a92adc61a70384ce59116f8ff747e8" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.554636 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bdp5n"] Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.569086 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bdp5n"] Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.571472 4874 scope.go:117] "RemoveContainer" containerID="a3fb8abd9747c4bb400b1d4533e2a33a5f1808df7be49594ff6e9e5db4c9aa85" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.592370 4874 scope.go:117] "RemoveContainer" containerID="e440d1b5a98a70fb0140d96828eeff03c8761c2df34a5714a0f47a188c26f27e" Jan 22 12:37:52 crc kubenswrapper[4874]: E0122 12:37:52.592820 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e440d1b5a98a70fb0140d96828eeff03c8761c2df34a5714a0f47a188c26f27e\": container with ID starting with e440d1b5a98a70fb0140d96828eeff03c8761c2df34a5714a0f47a188c26f27e not found: ID does not exist" containerID="e440d1b5a98a70fb0140d96828eeff03c8761c2df34a5714a0f47a188c26f27e" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.592849 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e440d1b5a98a70fb0140d96828eeff03c8761c2df34a5714a0f47a188c26f27e"} err="failed to get container status \"e440d1b5a98a70fb0140d96828eeff03c8761c2df34a5714a0f47a188c26f27e\": rpc error: code = NotFound desc = could not find container \"e440d1b5a98a70fb0140d96828eeff03c8761c2df34a5714a0f47a188c26f27e\": container with ID starting with 
e440d1b5a98a70fb0140d96828eeff03c8761c2df34a5714a0f47a188c26f27e not found: ID does not exist" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.592869 4874 scope.go:117] "RemoveContainer" containerID="0983b99ba4d62c33ac972b4d48dce835b2a92adc61a70384ce59116f8ff747e8" Jan 22 12:37:52 crc kubenswrapper[4874]: E0122 12:37:52.593150 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0983b99ba4d62c33ac972b4d48dce835b2a92adc61a70384ce59116f8ff747e8\": container with ID starting with 0983b99ba4d62c33ac972b4d48dce835b2a92adc61a70384ce59116f8ff747e8 not found: ID does not exist" containerID="0983b99ba4d62c33ac972b4d48dce835b2a92adc61a70384ce59116f8ff747e8" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.593179 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0983b99ba4d62c33ac972b4d48dce835b2a92adc61a70384ce59116f8ff747e8"} err="failed to get container status \"0983b99ba4d62c33ac972b4d48dce835b2a92adc61a70384ce59116f8ff747e8\": rpc error: code = NotFound desc = could not find container \"0983b99ba4d62c33ac972b4d48dce835b2a92adc61a70384ce59116f8ff747e8\": container with ID starting with 0983b99ba4d62c33ac972b4d48dce835b2a92adc61a70384ce59116f8ff747e8 not found: ID does not exist" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.593193 4874 scope.go:117] "RemoveContainer" containerID="a3fb8abd9747c4bb400b1d4533e2a33a5f1808df7be49594ff6e9e5db4c9aa85" Jan 22 12:37:52 crc kubenswrapper[4874]: E0122 12:37:52.593577 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3fb8abd9747c4bb400b1d4533e2a33a5f1808df7be49594ff6e9e5db4c9aa85\": container with ID starting with a3fb8abd9747c4bb400b1d4533e2a33a5f1808df7be49594ff6e9e5db4c9aa85 not found: ID does not exist" containerID="a3fb8abd9747c4bb400b1d4533e2a33a5f1808df7be49594ff6e9e5db4c9aa85" Jan 22 12:37:52 crc 
kubenswrapper[4874]: I0122 12:37:52.593679 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3fb8abd9747c4bb400b1d4533e2a33a5f1808df7be49594ff6e9e5db4c9aa85"} err="failed to get container status \"a3fb8abd9747c4bb400b1d4533e2a33a5f1808df7be49594ff6e9e5db4c9aa85\": rpc error: code = NotFound desc = could not find container \"a3fb8abd9747c4bb400b1d4533e2a33a5f1808df7be49594ff6e9e5db4c9aa85\": container with ID starting with a3fb8abd9747c4bb400b1d4533e2a33a5f1808df7be49594ff6e9e5db4c9aa85 not found: ID does not exist" Jan 22 12:37:52 crc kubenswrapper[4874]: I0122 12:37:52.724553 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb7aed4-6e72-4661-bc08-fad9bf646202" path="/var/lib/kubelet/pods/1fb7aed4-6e72-4661-bc08-fad9bf646202/volumes" Jan 22 12:38:13 crc kubenswrapper[4874]: I0122 12:38:13.520444 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:38:13 crc kubenswrapper[4874]: I0122 12:38:13.520868 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:38:13 crc kubenswrapper[4874]: I0122 12:38:13.520907 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 12:38:13 crc kubenswrapper[4874]: I0122 12:38:13.521627 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"07181174f3dd294d8a33813c347d0b5e89eda4823fa5474d3fdedb3ef1a9e13a"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:38:13 crc kubenswrapper[4874]: I0122 12:38:13.521670 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://07181174f3dd294d8a33813c347d0b5e89eda4823fa5474d3fdedb3ef1a9e13a" gracePeriod=600 Jan 22 12:38:13 crc kubenswrapper[4874]: I0122 12:38:13.705577 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="07181174f3dd294d8a33813c347d0b5e89eda4823fa5474d3fdedb3ef1a9e13a" exitCode=0 Jan 22 12:38:13 crc kubenswrapper[4874]: I0122 12:38:13.705632 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"07181174f3dd294d8a33813c347d0b5e89eda4823fa5474d3fdedb3ef1a9e13a"} Jan 22 12:38:13 crc kubenswrapper[4874]: I0122 12:38:13.705669 4874 scope.go:117] "RemoveContainer" containerID="01df6b3be23e0ceccc44bb801500d768b66322dd1258e6f93bd6e67bc3e1efd7" Jan 22 12:38:14 crc kubenswrapper[4874]: I0122 12:38:14.732144 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b"} Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.471169 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9n79x"] Jan 22 12:39:18 crc kubenswrapper[4874]: E0122 12:39:18.472380 
4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb7aed4-6e72-4661-bc08-fad9bf646202" containerName="registry-server" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.472433 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb7aed4-6e72-4661-bc08-fad9bf646202" containerName="registry-server" Jan 22 12:39:18 crc kubenswrapper[4874]: E0122 12:39:18.472456 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb7aed4-6e72-4661-bc08-fad9bf646202" containerName="extract-utilities" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.472468 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb7aed4-6e72-4661-bc08-fad9bf646202" containerName="extract-utilities" Jan 22 12:39:18 crc kubenswrapper[4874]: E0122 12:39:18.472488 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb7aed4-6e72-4661-bc08-fad9bf646202" containerName="extract-content" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.472500 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb7aed4-6e72-4661-bc08-fad9bf646202" containerName="extract-content" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.472750 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb7aed4-6e72-4661-bc08-fad9bf646202" containerName="registry-server" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.474688 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.482888 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9n79x"] Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.582275 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b37623-67a8-4f21-9152-c51710b5a5e5-utilities\") pod \"redhat-operators-9n79x\" (UID: \"b7b37623-67a8-4f21-9152-c51710b5a5e5\") " pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.582349 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7gmt\" (UniqueName: \"kubernetes.io/projected/b7b37623-67a8-4f21-9152-c51710b5a5e5-kube-api-access-w7gmt\") pod \"redhat-operators-9n79x\" (UID: \"b7b37623-67a8-4f21-9152-c51710b5a5e5\") " pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.582656 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b37623-67a8-4f21-9152-c51710b5a5e5-catalog-content\") pod \"redhat-operators-9n79x\" (UID: \"b7b37623-67a8-4f21-9152-c51710b5a5e5\") " pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.684234 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b37623-67a8-4f21-9152-c51710b5a5e5-utilities\") pod \"redhat-operators-9n79x\" (UID: \"b7b37623-67a8-4f21-9152-c51710b5a5e5\") " pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.684328 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w7gmt\" (UniqueName: \"kubernetes.io/projected/b7b37623-67a8-4f21-9152-c51710b5a5e5-kube-api-access-w7gmt\") pod \"redhat-operators-9n79x\" (UID: \"b7b37623-67a8-4f21-9152-c51710b5a5e5\") " pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.684445 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b37623-67a8-4f21-9152-c51710b5a5e5-catalog-content\") pod \"redhat-operators-9n79x\" (UID: \"b7b37623-67a8-4f21-9152-c51710b5a5e5\") " pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.685062 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b37623-67a8-4f21-9152-c51710b5a5e5-catalog-content\") pod \"redhat-operators-9n79x\" (UID: \"b7b37623-67a8-4f21-9152-c51710b5a5e5\") " pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.685204 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b37623-67a8-4f21-9152-c51710b5a5e5-utilities\") pod \"redhat-operators-9n79x\" (UID: \"b7b37623-67a8-4f21-9152-c51710b5a5e5\") " pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.702500 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7gmt\" (UniqueName: \"kubernetes.io/projected/b7b37623-67a8-4f21-9152-c51710b5a5e5-kube-api-access-w7gmt\") pod \"redhat-operators-9n79x\" (UID: \"b7b37623-67a8-4f21-9152-c51710b5a5e5\") " pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:18 crc kubenswrapper[4874]: I0122 12:39:18.812313 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:19 crc kubenswrapper[4874]: I0122 12:39:19.022422 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9n79x"] Jan 22 12:39:19 crc kubenswrapper[4874]: I0122 12:39:19.260150 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9n79x" event={"ID":"b7b37623-67a8-4f21-9152-c51710b5a5e5","Type":"ContainerStarted","Data":"9f023910baea6188cb4dcf599c845beb6ab5eed90edccfda962087fc408ed0b0"} Jan 22 12:39:20 crc kubenswrapper[4874]: I0122 12:39:20.266420 4874 generic.go:334] "Generic (PLEG): container finished" podID="b7b37623-67a8-4f21-9152-c51710b5a5e5" containerID="5f40bc937a97917b661dbc429c0bc3e2725cafe729004e75691d3832cb5d5cff" exitCode=0 Jan 22 12:39:20 crc kubenswrapper[4874]: I0122 12:39:20.266457 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9n79x" event={"ID":"b7b37623-67a8-4f21-9152-c51710b5a5e5","Type":"ContainerDied","Data":"5f40bc937a97917b661dbc429c0bc3e2725cafe729004e75691d3832cb5d5cff"} Jan 22 12:39:20 crc kubenswrapper[4874]: I0122 12:39:20.268080 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:39:22 crc kubenswrapper[4874]: I0122 12:39:22.282936 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9n79x" event={"ID":"b7b37623-67a8-4f21-9152-c51710b5a5e5","Type":"ContainerStarted","Data":"a394968505723f72cc7683f448a2532ffda3dda4790de0130c838c94033d17ea"} Jan 22 12:39:23 crc kubenswrapper[4874]: I0122 12:39:23.291366 4874 generic.go:334] "Generic (PLEG): container finished" podID="b7b37623-67a8-4f21-9152-c51710b5a5e5" containerID="a394968505723f72cc7683f448a2532ffda3dda4790de0130c838c94033d17ea" exitCode=0 Jan 22 12:39:23 crc kubenswrapper[4874]: I0122 12:39:23.291420 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9n79x" event={"ID":"b7b37623-67a8-4f21-9152-c51710b5a5e5","Type":"ContainerDied","Data":"a394968505723f72cc7683f448a2532ffda3dda4790de0130c838c94033d17ea"} Jan 22 12:39:26 crc kubenswrapper[4874]: I0122 12:39:26.319815 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9n79x" event={"ID":"b7b37623-67a8-4f21-9152-c51710b5a5e5","Type":"ContainerStarted","Data":"4e959bd367b219aa447bff1d69bec8f685cc10afcc77b393c6a31ac1dcf1aa0d"} Jan 22 12:39:26 crc kubenswrapper[4874]: I0122 12:39:26.344167 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9n79x" podStartSLOduration=3.411750087 podStartE2EDuration="8.344148638s" podCreationTimestamp="2026-01-22 12:39:18 +0000 UTC" firstStartedPulling="2026-01-22 12:39:20.267839685 +0000 UTC m=+3534.112910755" lastFinishedPulling="2026-01-22 12:39:25.200238226 +0000 UTC m=+3539.045309306" observedRunningTime="2026-01-22 12:39:26.344063505 +0000 UTC m=+3540.189134575" watchObservedRunningTime="2026-01-22 12:39:26.344148638 +0000 UTC m=+3540.189219718" Jan 22 12:39:28 crc kubenswrapper[4874]: I0122 12:39:28.812628 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:28 crc kubenswrapper[4874]: I0122 12:39:28.813990 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:29 crc kubenswrapper[4874]: I0122 12:39:29.863202 4874 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9n79x" podUID="b7b37623-67a8-4f21-9152-c51710b5a5e5" containerName="registry-server" probeResult="failure" output=< Jan 22 12:39:29 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Jan 22 12:39:29 crc kubenswrapper[4874]: > Jan 22 12:39:38 crc kubenswrapper[4874]: I0122 
12:39:38.871920 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:38 crc kubenswrapper[4874]: I0122 12:39:38.951101 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.114580 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9n79x"] Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.115329 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9n79x" podUID="b7b37623-67a8-4f21-9152-c51710b5a5e5" containerName="registry-server" containerID="cri-o://4e959bd367b219aa447bff1d69bec8f685cc10afcc77b393c6a31ac1dcf1aa0d" gracePeriod=2 Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.457594 4874 generic.go:334] "Generic (PLEG): container finished" podID="b7b37623-67a8-4f21-9152-c51710b5a5e5" containerID="4e959bd367b219aa447bff1d69bec8f685cc10afcc77b393c6a31ac1dcf1aa0d" exitCode=0 Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.458038 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9n79x" event={"ID":"b7b37623-67a8-4f21-9152-c51710b5a5e5","Type":"ContainerDied","Data":"4e959bd367b219aa447bff1d69bec8f685cc10afcc77b393c6a31ac1dcf1aa0d"} Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.619166 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.773029 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b37623-67a8-4f21-9152-c51710b5a5e5-catalog-content\") pod \"b7b37623-67a8-4f21-9152-c51710b5a5e5\" (UID: \"b7b37623-67a8-4f21-9152-c51710b5a5e5\") " Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.773164 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7gmt\" (UniqueName: \"kubernetes.io/projected/b7b37623-67a8-4f21-9152-c51710b5a5e5-kube-api-access-w7gmt\") pod \"b7b37623-67a8-4f21-9152-c51710b5a5e5\" (UID: \"b7b37623-67a8-4f21-9152-c51710b5a5e5\") " Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.773285 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b37623-67a8-4f21-9152-c51710b5a5e5-utilities\") pod \"b7b37623-67a8-4f21-9152-c51710b5a5e5\" (UID: \"b7b37623-67a8-4f21-9152-c51710b5a5e5\") " Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.775290 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7b37623-67a8-4f21-9152-c51710b5a5e5-utilities" (OuterVolumeSpecName: "utilities") pod "b7b37623-67a8-4f21-9152-c51710b5a5e5" (UID: "b7b37623-67a8-4f21-9152-c51710b5a5e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.801709 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b37623-67a8-4f21-9152-c51710b5a5e5-kube-api-access-w7gmt" (OuterVolumeSpecName: "kube-api-access-w7gmt") pod "b7b37623-67a8-4f21-9152-c51710b5a5e5" (UID: "b7b37623-67a8-4f21-9152-c51710b5a5e5"). InnerVolumeSpecName "kube-api-access-w7gmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.875802 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b37623-67a8-4f21-9152-c51710b5a5e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.875851 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7gmt\" (UniqueName: \"kubernetes.io/projected/b7b37623-67a8-4f21-9152-c51710b5a5e5-kube-api-access-w7gmt\") on node \"crc\" DevicePath \"\"" Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.961829 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7b37623-67a8-4f21-9152-c51710b5a5e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7b37623-67a8-4f21-9152-c51710b5a5e5" (UID: "b7b37623-67a8-4f21-9152-c51710b5a5e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:39:41 crc kubenswrapper[4874]: I0122 12:39:41.978733 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b37623-67a8-4f21-9152-c51710b5a5e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:39:42 crc kubenswrapper[4874]: I0122 12:39:42.466583 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9n79x" event={"ID":"b7b37623-67a8-4f21-9152-c51710b5a5e5","Type":"ContainerDied","Data":"9f023910baea6188cb4dcf599c845beb6ab5eed90edccfda962087fc408ed0b0"} Jan 22 12:39:42 crc kubenswrapper[4874]: I0122 12:39:42.466921 4874 scope.go:117] "RemoveContainer" containerID="4e959bd367b219aa447bff1d69bec8f685cc10afcc77b393c6a31ac1dcf1aa0d" Jan 22 12:39:42 crc kubenswrapper[4874]: I0122 12:39:42.467118 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9n79x" Jan 22 12:39:42 crc kubenswrapper[4874]: I0122 12:39:42.513632 4874 scope.go:117] "RemoveContainer" containerID="a394968505723f72cc7683f448a2532ffda3dda4790de0130c838c94033d17ea" Jan 22 12:39:42 crc kubenswrapper[4874]: I0122 12:39:42.530603 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9n79x"] Jan 22 12:39:42 crc kubenswrapper[4874]: I0122 12:39:42.538610 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9n79x"] Jan 22 12:39:42 crc kubenswrapper[4874]: I0122 12:39:42.543595 4874 scope.go:117] "RemoveContainer" containerID="5f40bc937a97917b661dbc429c0bc3e2725cafe729004e75691d3832cb5d5cff" Jan 22 12:39:42 crc kubenswrapper[4874]: I0122 12:39:42.738312 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b37623-67a8-4f21-9152-c51710b5a5e5" path="/var/lib/kubelet/pods/b7b37623-67a8-4f21-9152-c51710b5a5e5/volumes" Jan 22 12:40:13 crc kubenswrapper[4874]: I0122 12:40:13.520960 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:40:13 crc kubenswrapper[4874]: I0122 12:40:13.521949 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:40:43 crc kubenswrapper[4874]: I0122 12:40:43.521139 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:40:43 crc kubenswrapper[4874]: I0122 12:40:43.521841 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:41:13 crc kubenswrapper[4874]: I0122 12:41:13.520854 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:41:13 crc kubenswrapper[4874]: I0122 12:41:13.523294 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:41:13 crc kubenswrapper[4874]: I0122 12:41:13.523882 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 12:41:13 crc kubenswrapper[4874]: I0122 12:41:13.530147 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:41:13 crc kubenswrapper[4874]: I0122 12:41:13.530689 4874 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" gracePeriod=600 Jan 22 12:41:14 crc kubenswrapper[4874]: E0122 12:41:14.163658 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:41:14 crc kubenswrapper[4874]: I0122 12:41:14.227112 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" exitCode=0 Jan 22 12:41:14 crc kubenswrapper[4874]: I0122 12:41:14.227179 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b"} Jan 22 12:41:14 crc kubenswrapper[4874]: I0122 12:41:14.227230 4874 scope.go:117] "RemoveContainer" containerID="07181174f3dd294d8a33813c347d0b5e89eda4823fa5474d3fdedb3ef1a9e13a" Jan 22 12:41:14 crc kubenswrapper[4874]: I0122 12:41:14.229201 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:41:14 crc kubenswrapper[4874]: E0122 12:41:14.229737 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:41:26 crc kubenswrapper[4874]: I0122 12:41:26.762585 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:41:26 crc kubenswrapper[4874]: E0122 12:41:26.763615 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:41:37 crc kubenswrapper[4874]: I0122 12:41:37.715985 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:41:37 crc kubenswrapper[4874]: E0122 12:41:37.717028 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:41:49 crc kubenswrapper[4874]: I0122 12:41:49.716355 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:41:49 crc kubenswrapper[4874]: E0122 12:41:49.717171 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:42:00 crc kubenswrapper[4874]: I0122 12:42:00.717919 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:42:00 crc kubenswrapper[4874]: E0122 12:42:00.721012 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:42:14 crc kubenswrapper[4874]: I0122 12:42:14.716733 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:42:14 crc kubenswrapper[4874]: E0122 12:42:14.717713 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:42:25 crc kubenswrapper[4874]: I0122 12:42:25.716278 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:42:25 crc kubenswrapper[4874]: E0122 12:42:25.717247 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:42:40 crc kubenswrapper[4874]: I0122 12:42:40.717085 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:42:40 crc kubenswrapper[4874]: E0122 12:42:40.718490 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:42:53 crc kubenswrapper[4874]: I0122 12:42:53.717022 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:42:53 crc kubenswrapper[4874]: E0122 12:42:53.717748 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:43:04 crc kubenswrapper[4874]: I0122 12:43:04.716784 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:43:04 crc kubenswrapper[4874]: E0122 12:43:04.718005 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:43:19 crc kubenswrapper[4874]: I0122 12:43:19.716040 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:43:19 crc kubenswrapper[4874]: E0122 12:43:19.716889 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:43:30 crc kubenswrapper[4874]: I0122 12:43:30.719236 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:43:30 crc kubenswrapper[4874]: E0122 12:43:30.720764 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:43:44 crc kubenswrapper[4874]: I0122 12:43:44.716872 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:43:44 crc kubenswrapper[4874]: E0122 12:43:44.717914 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:43:58 crc kubenswrapper[4874]: I0122 12:43:58.716321 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:43:58 crc kubenswrapper[4874]: E0122 12:43:58.717269 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:44:13 crc kubenswrapper[4874]: I0122 12:44:13.716596 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:44:13 crc kubenswrapper[4874]: E0122 12:44:13.717257 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:44:24 crc kubenswrapper[4874]: I0122 12:44:24.716968 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:44:24 crc kubenswrapper[4874]: E0122 12:44:24.718152 4874 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:44:38 crc kubenswrapper[4874]: I0122 12:44:38.720996 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:44:38 crc kubenswrapper[4874]: E0122 12:44:38.721677 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:44:48 crc kubenswrapper[4874]: I0122 12:44:48.936541 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7mj86"] Jan 22 12:44:48 crc kubenswrapper[4874]: E0122 12:44:48.937615 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b37623-67a8-4f21-9152-c51710b5a5e5" containerName="registry-server" Jan 22 12:44:48 crc kubenswrapper[4874]: I0122 12:44:48.937635 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b37623-67a8-4f21-9152-c51710b5a5e5" containerName="registry-server" Jan 22 12:44:48 crc kubenswrapper[4874]: E0122 12:44:48.937652 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b37623-67a8-4f21-9152-c51710b5a5e5" containerName="extract-content" Jan 22 12:44:48 crc kubenswrapper[4874]: I0122 12:44:48.937661 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b37623-67a8-4f21-9152-c51710b5a5e5" 
containerName="extract-content" Jan 22 12:44:48 crc kubenswrapper[4874]: E0122 12:44:48.937690 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b37623-67a8-4f21-9152-c51710b5a5e5" containerName="extract-utilities" Jan 22 12:44:48 crc kubenswrapper[4874]: I0122 12:44:48.937699 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b37623-67a8-4f21-9152-c51710b5a5e5" containerName="extract-utilities" Jan 22 12:44:48 crc kubenswrapper[4874]: I0122 12:44:48.937853 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b37623-67a8-4f21-9152-c51710b5a5e5" containerName="registry-server" Jan 22 12:44:48 crc kubenswrapper[4874]: I0122 12:44:48.939126 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:44:48 crc kubenswrapper[4874]: I0122 12:44:48.944903 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7mj86"] Jan 22 12:44:48 crc kubenswrapper[4874]: I0122 12:44:48.975179 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-catalog-content\") pod \"certified-operators-7mj86\" (UID: \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\") " pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:44:48 crc kubenswrapper[4874]: I0122 12:44:48.975231 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-utilities\") pod \"certified-operators-7mj86\" (UID: \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\") " pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:44:48 crc kubenswrapper[4874]: I0122 12:44:48.975418 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7v87g\" (UniqueName: \"kubernetes.io/projected/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-kube-api-access-7v87g\") pod \"certified-operators-7mj86\" (UID: \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\") " pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:44:49 crc kubenswrapper[4874]: I0122 12:44:49.076369 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-catalog-content\") pod \"certified-operators-7mj86\" (UID: \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\") " pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:44:49 crc kubenswrapper[4874]: I0122 12:44:49.076432 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-utilities\") pod \"certified-operators-7mj86\" (UID: \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\") " pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:44:49 crc kubenswrapper[4874]: I0122 12:44:49.076555 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v87g\" (UniqueName: \"kubernetes.io/projected/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-kube-api-access-7v87g\") pod \"certified-operators-7mj86\" (UID: \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\") " pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:44:49 crc kubenswrapper[4874]: I0122 12:44:49.076914 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-utilities\") pod \"certified-operators-7mj86\" (UID: \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\") " pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:44:49 crc kubenswrapper[4874]: I0122 12:44:49.077145 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-catalog-content\") pod \"certified-operators-7mj86\" (UID: \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\") " pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:44:49 crc kubenswrapper[4874]: I0122 12:44:49.691557 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v87g\" (UniqueName: \"kubernetes.io/projected/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-kube-api-access-7v87g\") pod \"certified-operators-7mj86\" (UID: \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\") " pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:44:49 crc kubenswrapper[4874]: I0122 12:44:49.868428 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:44:50 crc kubenswrapper[4874]: I0122 12:44:50.096561 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7mj86"] Jan 22 12:44:50 crc kubenswrapper[4874]: I0122 12:44:50.196661 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mj86" event={"ID":"402518c1-cf9f-4e51-9e46-2c7d1527ce6d","Type":"ContainerStarted","Data":"8ae86083e447aa6ec881faf0e570a3a06ed57d31d32d7fb452cb02a25568d862"} Jan 22 12:44:50 crc kubenswrapper[4874]: I0122 12:44:50.717036 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:44:50 crc kubenswrapper[4874]: E0122 12:44:50.717815 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 
22 12:44:51 crc kubenswrapper[4874]: I0122 12:44:51.208035 4874 generic.go:334] "Generic (PLEG): container finished" podID="402518c1-cf9f-4e51-9e46-2c7d1527ce6d" containerID="468a698adc17dc00dba716877ea0acfe03d366fef4507a99ac8e71a28acd0f19" exitCode=0 Jan 22 12:44:51 crc kubenswrapper[4874]: I0122 12:44:51.208100 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mj86" event={"ID":"402518c1-cf9f-4e51-9e46-2c7d1527ce6d","Type":"ContainerDied","Data":"468a698adc17dc00dba716877ea0acfe03d366fef4507a99ac8e71a28acd0f19"} Jan 22 12:44:51 crc kubenswrapper[4874]: I0122 12:44:51.211069 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:44:52 crc kubenswrapper[4874]: I0122 12:44:52.218442 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mj86" event={"ID":"402518c1-cf9f-4e51-9e46-2c7d1527ce6d","Type":"ContainerStarted","Data":"ae2787145bd140f284b5eb542ae4e0ed93975463dadefb31e27e4813c69e93b8"} Jan 22 12:44:53 crc kubenswrapper[4874]: I0122 12:44:53.225003 4874 generic.go:334] "Generic (PLEG): container finished" podID="402518c1-cf9f-4e51-9e46-2c7d1527ce6d" containerID="ae2787145bd140f284b5eb542ae4e0ed93975463dadefb31e27e4813c69e93b8" exitCode=0 Jan 22 12:44:53 crc kubenswrapper[4874]: I0122 12:44:53.225055 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mj86" event={"ID":"402518c1-cf9f-4e51-9e46-2c7d1527ce6d","Type":"ContainerDied","Data":"ae2787145bd140f284b5eb542ae4e0ed93975463dadefb31e27e4813c69e93b8"} Jan 22 12:44:54 crc kubenswrapper[4874]: I0122 12:44:54.236690 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mj86" event={"ID":"402518c1-cf9f-4e51-9e46-2c7d1527ce6d","Type":"ContainerStarted","Data":"b79e90f22636cd0aed9aa68ea9ce648ca46b137121b7b51414479a1ad8658443"} Jan 22 12:44:54 crc 
kubenswrapper[4874]: I0122 12:44:54.267783 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7mj86" podStartSLOduration=3.835496477 podStartE2EDuration="6.267768058s" podCreationTimestamp="2026-01-22 12:44:48 +0000 UTC" firstStartedPulling="2026-01-22 12:44:51.210701639 +0000 UTC m=+3865.055772739" lastFinishedPulling="2026-01-22 12:44:53.64297325 +0000 UTC m=+3867.488044320" observedRunningTime="2026-01-22 12:44:54.263808045 +0000 UTC m=+3868.108879115" watchObservedRunningTime="2026-01-22 12:44:54.267768058 +0000 UTC m=+3868.112839128" Jan 22 12:44:59 crc kubenswrapper[4874]: I0122 12:44:59.869135 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:44:59 crc kubenswrapper[4874]: I0122 12:44:59.869876 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:44:59 crc kubenswrapper[4874]: I0122 12:44:59.919905 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.181628 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl"] Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.182386 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.185692 4874 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.185885 4874 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.201323 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl"] Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.232422 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fql5q\" (UniqueName: \"kubernetes.io/projected/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-kube-api-access-fql5q\") pod \"collect-profiles-29484765-kr6bl\" (UID: \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.232553 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-secret-volume\") pod \"collect-profiles-29484765-kr6bl\" (UID: \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.232607 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-config-volume\") pod \"collect-profiles-29484765-kr6bl\" (UID: \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.334553 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fql5q\" (UniqueName: \"kubernetes.io/projected/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-kube-api-access-fql5q\") pod \"collect-profiles-29484765-kr6bl\" (UID: \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.334851 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-secret-volume\") pod \"collect-profiles-29484765-kr6bl\" (UID: \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.335577 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-config-volume\") pod \"collect-profiles-29484765-kr6bl\" (UID: \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.336207 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-config-volume\") pod \"collect-profiles-29484765-kr6bl\" (UID: \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.336716 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 
12:45:00.341413 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-secret-volume\") pod \"collect-profiles-29484765-kr6bl\" (UID: \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.365140 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fql5q\" (UniqueName: \"kubernetes.io/projected/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-kube-api-access-fql5q\") pod \"collect-profiles-29484765-kr6bl\" (UID: \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.391773 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7mj86"] Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.500031 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" Jan 22 12:45:00 crc kubenswrapper[4874]: I0122 12:45:00.701941 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl"] Jan 22 12:45:01 crc kubenswrapper[4874]: I0122 12:45:01.317953 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" event={"ID":"d729e7c6-4664-47cb-b4bd-9892e1cf5aec","Type":"ContainerStarted","Data":"d454170fcee92be58faf76410af524e6d12e151305a1d5537c188dd81c65eef4"} Jan 22 12:45:01 crc kubenswrapper[4874]: I0122 12:45:01.318103 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" event={"ID":"d729e7c6-4664-47cb-b4bd-9892e1cf5aec","Type":"ContainerStarted","Data":"909e7447c9e1e506cd07baba43a99e331ccc847b174d0ffcaffa649c2276d6a9"} Jan 22 12:45:01 crc kubenswrapper[4874]: I0122 12:45:01.339856 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" podStartSLOduration=1.3398280439999999 podStartE2EDuration="1.339828044s" podCreationTimestamp="2026-01-22 12:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:45:01.334784179 +0000 UTC m=+3875.179855289" watchObservedRunningTime="2026-01-22 12:45:01.339828044 +0000 UTC m=+3875.184899124" Jan 22 12:45:02 crc kubenswrapper[4874]: I0122 12:45:02.327781 4874 generic.go:334] "Generic (PLEG): container finished" podID="d729e7c6-4664-47cb-b4bd-9892e1cf5aec" containerID="d454170fcee92be58faf76410af524e6d12e151305a1d5537c188dd81c65eef4" exitCode=0 Jan 22 12:45:02 crc kubenswrapper[4874]: I0122 12:45:02.329478 4874 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-7mj86" podUID="402518c1-cf9f-4e51-9e46-2c7d1527ce6d" containerName="registry-server" containerID="cri-o://b79e90f22636cd0aed9aa68ea9ce648ca46b137121b7b51414479a1ad8658443" gracePeriod=2 Jan 22 12:45:02 crc kubenswrapper[4874]: I0122 12:45:02.328077 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" event={"ID":"d729e7c6-4664-47cb-b4bd-9892e1cf5aec","Type":"ContainerDied","Data":"d454170fcee92be58faf76410af524e6d12e151305a1d5537c188dd81c65eef4"} Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.159025 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.208267 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-utilities\") pod \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\" (UID: \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\") " Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.208354 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-catalog-content\") pod \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\" (UID: \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\") " Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.208388 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v87g\" (UniqueName: \"kubernetes.io/projected/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-kube-api-access-7v87g\") pod \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\" (UID: \"402518c1-cf9f-4e51-9e46-2c7d1527ce6d\") " Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.209252 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-utilities" (OuterVolumeSpecName: "utilities") pod "402518c1-cf9f-4e51-9e46-2c7d1527ce6d" (UID: "402518c1-cf9f-4e51-9e46-2c7d1527ce6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.222315 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-kube-api-access-7v87g" (OuterVolumeSpecName: "kube-api-access-7v87g") pod "402518c1-cf9f-4e51-9e46-2c7d1527ce6d" (UID: "402518c1-cf9f-4e51-9e46-2c7d1527ce6d"). InnerVolumeSpecName "kube-api-access-7v87g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.269494 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "402518c1-cf9f-4e51-9e46-2c7d1527ce6d" (UID: "402518c1-cf9f-4e51-9e46-2c7d1527ce6d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.310024 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.310088 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.310104 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v87g\" (UniqueName: \"kubernetes.io/projected/402518c1-cf9f-4e51-9e46-2c7d1527ce6d-kube-api-access-7v87g\") on node \"crc\" DevicePath \"\"" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.342256 4874 generic.go:334] "Generic (PLEG): container finished" podID="402518c1-cf9f-4e51-9e46-2c7d1527ce6d" containerID="b79e90f22636cd0aed9aa68ea9ce648ca46b137121b7b51414479a1ad8658443" exitCode=0 Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.342379 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7mj86" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.342471 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mj86" event={"ID":"402518c1-cf9f-4e51-9e46-2c7d1527ce6d","Type":"ContainerDied","Data":"b79e90f22636cd0aed9aa68ea9ce648ca46b137121b7b51414479a1ad8658443"} Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.342541 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7mj86" event={"ID":"402518c1-cf9f-4e51-9e46-2c7d1527ce6d","Type":"ContainerDied","Data":"8ae86083e447aa6ec881faf0e570a3a06ed57d31d32d7fb452cb02a25568d862"} Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.342564 4874 scope.go:117] "RemoveContainer" containerID="b79e90f22636cd0aed9aa68ea9ce648ca46b137121b7b51414479a1ad8658443" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.367648 4874 scope.go:117] "RemoveContainer" containerID="ae2787145bd140f284b5eb542ae4e0ed93975463dadefb31e27e4813c69e93b8" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.387268 4874 scope.go:117] "RemoveContainer" containerID="468a698adc17dc00dba716877ea0acfe03d366fef4507a99ac8e71a28acd0f19" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.404193 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7mj86"] Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.408559 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7mj86"] Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.522953 4874 scope.go:117] "RemoveContainer" containerID="b79e90f22636cd0aed9aa68ea9ce648ca46b137121b7b51414479a1ad8658443" Jan 22 12:45:03 crc kubenswrapper[4874]: E0122 12:45:03.523567 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b79e90f22636cd0aed9aa68ea9ce648ca46b137121b7b51414479a1ad8658443\": container with ID starting with b79e90f22636cd0aed9aa68ea9ce648ca46b137121b7b51414479a1ad8658443 not found: ID does not exist" containerID="b79e90f22636cd0aed9aa68ea9ce648ca46b137121b7b51414479a1ad8658443" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.523649 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b79e90f22636cd0aed9aa68ea9ce648ca46b137121b7b51414479a1ad8658443"} err="failed to get container status \"b79e90f22636cd0aed9aa68ea9ce648ca46b137121b7b51414479a1ad8658443\": rpc error: code = NotFound desc = could not find container \"b79e90f22636cd0aed9aa68ea9ce648ca46b137121b7b51414479a1ad8658443\": container with ID starting with b79e90f22636cd0aed9aa68ea9ce648ca46b137121b7b51414479a1ad8658443 not found: ID does not exist" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.523701 4874 scope.go:117] "RemoveContainer" containerID="ae2787145bd140f284b5eb542ae4e0ed93975463dadefb31e27e4813c69e93b8" Jan 22 12:45:03 crc kubenswrapper[4874]: E0122 12:45:03.524128 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae2787145bd140f284b5eb542ae4e0ed93975463dadefb31e27e4813c69e93b8\": container with ID starting with ae2787145bd140f284b5eb542ae4e0ed93975463dadefb31e27e4813c69e93b8 not found: ID does not exist" containerID="ae2787145bd140f284b5eb542ae4e0ed93975463dadefb31e27e4813c69e93b8" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.524183 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2787145bd140f284b5eb542ae4e0ed93975463dadefb31e27e4813c69e93b8"} err="failed to get container status \"ae2787145bd140f284b5eb542ae4e0ed93975463dadefb31e27e4813c69e93b8\": rpc error: code = NotFound desc = could not find container \"ae2787145bd140f284b5eb542ae4e0ed93975463dadefb31e27e4813c69e93b8\": container with ID 
starting with ae2787145bd140f284b5eb542ae4e0ed93975463dadefb31e27e4813c69e93b8 not found: ID does not exist" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.524209 4874 scope.go:117] "RemoveContainer" containerID="468a698adc17dc00dba716877ea0acfe03d366fef4507a99ac8e71a28acd0f19" Jan 22 12:45:03 crc kubenswrapper[4874]: E0122 12:45:03.524504 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468a698adc17dc00dba716877ea0acfe03d366fef4507a99ac8e71a28acd0f19\": container with ID starting with 468a698adc17dc00dba716877ea0acfe03d366fef4507a99ac8e71a28acd0f19 not found: ID does not exist" containerID="468a698adc17dc00dba716877ea0acfe03d366fef4507a99ac8e71a28acd0f19" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.524536 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468a698adc17dc00dba716877ea0acfe03d366fef4507a99ac8e71a28acd0f19"} err="failed to get container status \"468a698adc17dc00dba716877ea0acfe03d366fef4507a99ac8e71a28acd0f19\": rpc error: code = NotFound desc = could not find container \"468a698adc17dc00dba716877ea0acfe03d366fef4507a99ac8e71a28acd0f19\": container with ID starting with 468a698adc17dc00dba716877ea0acfe03d366fef4507a99ac8e71a28acd0f19 not found: ID does not exist" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.681308 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.716547 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:45:03 crc kubenswrapper[4874]: E0122 12:45:03.716861 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.833911 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fql5q\" (UniqueName: \"kubernetes.io/projected/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-kube-api-access-fql5q\") pod \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\" (UID: \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\") " Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.834041 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-config-volume\") pod \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\" (UID: \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\") " Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.834116 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-secret-volume\") pod \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\" (UID: \"d729e7c6-4664-47cb-b4bd-9892e1cf5aec\") " Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.834788 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-config-volume" (OuterVolumeSpecName: "config-volume") pod "d729e7c6-4664-47cb-b4bd-9892e1cf5aec" (UID: "d729e7c6-4664-47cb-b4bd-9892e1cf5aec"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.838192 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-kube-api-access-fql5q" (OuterVolumeSpecName: "kube-api-access-fql5q") pod "d729e7c6-4664-47cb-b4bd-9892e1cf5aec" (UID: "d729e7c6-4664-47cb-b4bd-9892e1cf5aec"). InnerVolumeSpecName "kube-api-access-fql5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.838745 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d729e7c6-4664-47cb-b4bd-9892e1cf5aec" (UID: "d729e7c6-4664-47cb-b4bd-9892e1cf5aec"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.936556 4874 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.936613 4874 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:45:03 crc kubenswrapper[4874]: I0122 12:45:03.936631 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fql5q\" (UniqueName: \"kubernetes.io/projected/d729e7c6-4664-47cb-b4bd-9892e1cf5aec-kube-api-access-fql5q\") on node \"crc\" DevicePath \"\"" Jan 22 12:45:04 crc kubenswrapper[4874]: I0122 12:45:04.354443 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" event={"ID":"d729e7c6-4664-47cb-b4bd-9892e1cf5aec","Type":"ContainerDied","Data":"909e7447c9e1e506cd07baba43a99e331ccc847b174d0ffcaffa649c2276d6a9"} Jan 22 12:45:04 crc kubenswrapper[4874]: I0122 12:45:04.354504 4874 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="909e7447c9e1e506cd07baba43a99e331ccc847b174d0ffcaffa649c2276d6a9" Jan 22 12:45:04 crc kubenswrapper[4874]: I0122 12:45:04.354513 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-kr6bl" Jan 22 12:45:04 crc kubenswrapper[4874]: I0122 12:45:04.430488 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2"] Jan 22 12:45:04 crc kubenswrapper[4874]: I0122 12:45:04.436876 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-8vrb2"] Jan 22 12:45:04 crc kubenswrapper[4874]: I0122 12:45:04.734063 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402518c1-cf9f-4e51-9e46-2c7d1527ce6d" path="/var/lib/kubelet/pods/402518c1-cf9f-4e51-9e46-2c7d1527ce6d/volumes" Jan 22 12:45:04 crc kubenswrapper[4874]: I0122 12:45:04.735155 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b40805b8-a40b-4854-80b3-28916f4f8a43" path="/var/lib/kubelet/pods/b40805b8-a40b-4854-80b3-28916f4f8a43/volumes" Jan 22 12:45:14 crc kubenswrapper[4874]: I0122 12:45:14.716250 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:45:14 crc kubenswrapper[4874]: E0122 12:45:14.716744 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:45:25 crc kubenswrapper[4874]: I0122 12:45:25.716339 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:45:25 crc kubenswrapper[4874]: E0122 12:45:25.717100 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:45:34 crc kubenswrapper[4874]: I0122 12:45:34.055272 4874 scope.go:117] "RemoveContainer" containerID="8a25148d438b262cdaed2dfa6ed49fd79b1992854289d94b46eaf3e9b7e53219" Jan 22 12:45:36 crc kubenswrapper[4874]: I0122 12:45:36.723504 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:45:36 crc kubenswrapper[4874]: E0122 12:45:36.724091 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:45:47 crc kubenswrapper[4874]: I0122 12:45:47.715532 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:45:47 crc kubenswrapper[4874]: E0122 12:45:47.716172 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:46:02 crc kubenswrapper[4874]: I0122 12:46:02.715929 4874 scope.go:117] "RemoveContainer" 
containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:46:02 crc kubenswrapper[4874]: E0122 12:46:02.716711 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" Jan 22 12:46:17 crc kubenswrapper[4874]: I0122 12:46:17.715843 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:46:18 crc kubenswrapper[4874]: I0122 12:46:18.023066 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"53dc1afb13dd727722ac5bb9850ed1be0aeb705ab0c93f80550ac8600fdf68e8"} Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.475133 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c2lw8"] Jan 22 12:48:24 crc kubenswrapper[4874]: E0122 12:48:24.475889 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d729e7c6-4664-47cb-b4bd-9892e1cf5aec" containerName="collect-profiles" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.475971 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="d729e7c6-4664-47cb-b4bd-9892e1cf5aec" containerName="collect-profiles" Jan 22 12:48:24 crc kubenswrapper[4874]: E0122 12:48:24.475997 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402518c1-cf9f-4e51-9e46-2c7d1527ce6d" containerName="extract-content" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.476006 4874 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="402518c1-cf9f-4e51-9e46-2c7d1527ce6d" containerName="extract-content" Jan 22 12:48:24 crc kubenswrapper[4874]: E0122 12:48:24.476024 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402518c1-cf9f-4e51-9e46-2c7d1527ce6d" containerName="registry-server" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.476032 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="402518c1-cf9f-4e51-9e46-2c7d1527ce6d" containerName="registry-server" Jan 22 12:48:24 crc kubenswrapper[4874]: E0122 12:48:24.476042 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402518c1-cf9f-4e51-9e46-2c7d1527ce6d" containerName="extract-utilities" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.476050 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="402518c1-cf9f-4e51-9e46-2c7d1527ce6d" containerName="extract-utilities" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.476200 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="d729e7c6-4664-47cb-b4bd-9892e1cf5aec" containerName="collect-profiles" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.476222 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="402518c1-cf9f-4e51-9e46-2c7d1527ce6d" containerName="registry-server" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.477307 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.501274 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2lw8"] Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.575410 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz8jv\" (UniqueName: \"kubernetes.io/projected/df5f75a7-68cf-4efb-85d6-663e9e293c0d-kube-api-access-kz8jv\") pod \"community-operators-c2lw8\" (UID: \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\") " pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.575508 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df5f75a7-68cf-4efb-85d6-663e9e293c0d-utilities\") pod \"community-operators-c2lw8\" (UID: \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\") " pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.575529 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df5f75a7-68cf-4efb-85d6-663e9e293c0d-catalog-content\") pod \"community-operators-c2lw8\" (UID: \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\") " pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.676899 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz8jv\" (UniqueName: \"kubernetes.io/projected/df5f75a7-68cf-4efb-85d6-663e9e293c0d-kube-api-access-kz8jv\") pod \"community-operators-c2lw8\" (UID: \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\") " pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.677008 4874 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df5f75a7-68cf-4efb-85d6-663e9e293c0d-utilities\") pod \"community-operators-c2lw8\" (UID: \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\") " pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.677031 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df5f75a7-68cf-4efb-85d6-663e9e293c0d-catalog-content\") pod \"community-operators-c2lw8\" (UID: \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\") " pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.677630 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df5f75a7-68cf-4efb-85d6-663e9e293c0d-catalog-content\") pod \"community-operators-c2lw8\" (UID: \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\") " pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.677647 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df5f75a7-68cf-4efb-85d6-663e9e293c0d-utilities\") pod \"community-operators-c2lw8\" (UID: \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\") " pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.710458 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz8jv\" (UniqueName: \"kubernetes.io/projected/df5f75a7-68cf-4efb-85d6-663e9e293c0d-kube-api-access-kz8jv\") pod \"community-operators-c2lw8\" (UID: \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\") " pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:24 crc kubenswrapper[4874]: I0122 12:48:24.848776 4874 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:25 crc kubenswrapper[4874]: I0122 12:48:25.154964 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2lw8"] Jan 22 12:48:26 crc kubenswrapper[4874]: I0122 12:48:26.065734 4874 generic.go:334] "Generic (PLEG): container finished" podID="df5f75a7-68cf-4efb-85d6-663e9e293c0d" containerID="169a02c51c136bfbf3fb31b59c7139a015d5b38327a437427f92cfbb15fd2587" exitCode=0 Jan 22 12:48:26 crc kubenswrapper[4874]: I0122 12:48:26.065809 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lw8" event={"ID":"df5f75a7-68cf-4efb-85d6-663e9e293c0d","Type":"ContainerDied","Data":"169a02c51c136bfbf3fb31b59c7139a015d5b38327a437427f92cfbb15fd2587"} Jan 22 12:48:26 crc kubenswrapper[4874]: I0122 12:48:26.066170 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lw8" event={"ID":"df5f75a7-68cf-4efb-85d6-663e9e293c0d","Type":"ContainerStarted","Data":"cb79405ab1fef9062efed21bf67b07217b663478707cfe8835b6a9c305df5370"} Jan 22 12:48:27 crc kubenswrapper[4874]: I0122 12:48:27.096679 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lw8" event={"ID":"df5f75a7-68cf-4efb-85d6-663e9e293c0d","Type":"ContainerStarted","Data":"96f19a91649c00d3c5d916c4724cdf97877eaf4c77feceb978fd362291601a3e"} Jan 22 12:48:28 crc kubenswrapper[4874]: I0122 12:48:28.106223 4874 generic.go:334] "Generic (PLEG): container finished" podID="df5f75a7-68cf-4efb-85d6-663e9e293c0d" containerID="96f19a91649c00d3c5d916c4724cdf97877eaf4c77feceb978fd362291601a3e" exitCode=0 Jan 22 12:48:28 crc kubenswrapper[4874]: I0122 12:48:28.106317 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lw8" 
event={"ID":"df5f75a7-68cf-4efb-85d6-663e9e293c0d","Type":"ContainerDied","Data":"96f19a91649c00d3c5d916c4724cdf97877eaf4c77feceb978fd362291601a3e"} Jan 22 12:48:28 crc kubenswrapper[4874]: I0122 12:48:28.106680 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lw8" event={"ID":"df5f75a7-68cf-4efb-85d6-663e9e293c0d","Type":"ContainerStarted","Data":"7c4043ead839e73ba762859fea2607e4f3556e5ca613450f714358fcc2dabd7b"} Jan 22 12:48:29 crc kubenswrapper[4874]: I0122 12:48:29.145885 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c2lw8" podStartSLOduration=3.462193242 podStartE2EDuration="5.145865566s" podCreationTimestamp="2026-01-22 12:48:24 +0000 UTC" firstStartedPulling="2026-01-22 12:48:26.067824691 +0000 UTC m=+4079.912895761" lastFinishedPulling="2026-01-22 12:48:27.751496995 +0000 UTC m=+4081.596568085" observedRunningTime="2026-01-22 12:48:29.139877931 +0000 UTC m=+4082.984949031" watchObservedRunningTime="2026-01-22 12:48:29.145865566 +0000 UTC m=+4082.990936656" Jan 22 12:48:34 crc kubenswrapper[4874]: I0122 12:48:34.849670 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:34 crc kubenswrapper[4874]: I0122 12:48:34.850182 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:34 crc kubenswrapper[4874]: I0122 12:48:34.914124 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:35 crc kubenswrapper[4874]: I0122 12:48:35.212498 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:35 crc kubenswrapper[4874]: I0122 12:48:35.275475 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-c2lw8"] Jan 22 12:48:37 crc kubenswrapper[4874]: I0122 12:48:37.182933 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c2lw8" podUID="df5f75a7-68cf-4efb-85d6-663e9e293c0d" containerName="registry-server" containerID="cri-o://7c4043ead839e73ba762859fea2607e4f3556e5ca613450f714358fcc2dabd7b" gracePeriod=2 Jan 22 12:48:38 crc kubenswrapper[4874]: I0122 12:48:38.737496 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:38 crc kubenswrapper[4874]: I0122 12:48:38.918914 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df5f75a7-68cf-4efb-85d6-663e9e293c0d-catalog-content\") pod \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\" (UID: \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\") " Jan 22 12:48:38 crc kubenswrapper[4874]: I0122 12:48:38.919021 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz8jv\" (UniqueName: \"kubernetes.io/projected/df5f75a7-68cf-4efb-85d6-663e9e293c0d-kube-api-access-kz8jv\") pod \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\" (UID: \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\") " Jan 22 12:48:38 crc kubenswrapper[4874]: I0122 12:48:38.919060 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df5f75a7-68cf-4efb-85d6-663e9e293c0d-utilities\") pod \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\" (UID: \"df5f75a7-68cf-4efb-85d6-663e9e293c0d\") " Jan 22 12:48:38 crc kubenswrapper[4874]: I0122 12:48:38.920284 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df5f75a7-68cf-4efb-85d6-663e9e293c0d-utilities" (OuterVolumeSpecName: "utilities") pod "df5f75a7-68cf-4efb-85d6-663e9e293c0d" (UID: 
"df5f75a7-68cf-4efb-85d6-663e9e293c0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:48:38 crc kubenswrapper[4874]: I0122 12:48:38.930419 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df5f75a7-68cf-4efb-85d6-663e9e293c0d-kube-api-access-kz8jv" (OuterVolumeSpecName: "kube-api-access-kz8jv") pod "df5f75a7-68cf-4efb-85d6-663e9e293c0d" (UID: "df5f75a7-68cf-4efb-85d6-663e9e293c0d"). InnerVolumeSpecName "kube-api-access-kz8jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 12:48:38 crc kubenswrapper[4874]: I0122 12:48:38.995754 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df5f75a7-68cf-4efb-85d6-663e9e293c0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df5f75a7-68cf-4efb-85d6-663e9e293c0d" (UID: "df5f75a7-68cf-4efb-85d6-663e9e293c0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.021055 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz8jv\" (UniqueName: \"kubernetes.io/projected/df5f75a7-68cf-4efb-85d6-663e9e293c0d-kube-api-access-kz8jv\") on node \"crc\" DevicePath \"\"" Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.021093 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df5f75a7-68cf-4efb-85d6-663e9e293c0d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.021104 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df5f75a7-68cf-4efb-85d6-663e9e293c0d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.205774 4874 generic.go:334] "Generic (PLEG): container finished" 
podID="df5f75a7-68cf-4efb-85d6-663e9e293c0d" containerID="7c4043ead839e73ba762859fea2607e4f3556e5ca613450f714358fcc2dabd7b" exitCode=0 Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.205824 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lw8" event={"ID":"df5f75a7-68cf-4efb-85d6-663e9e293c0d","Type":"ContainerDied","Data":"7c4043ead839e73ba762859fea2607e4f3556e5ca613450f714358fcc2dabd7b"} Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.205853 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lw8" event={"ID":"df5f75a7-68cf-4efb-85d6-663e9e293c0d","Type":"ContainerDied","Data":"cb79405ab1fef9062efed21bf67b07217b663478707cfe8835b6a9c305df5370"} Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.205873 4874 scope.go:117] "RemoveContainer" containerID="7c4043ead839e73ba762859fea2607e4f3556e5ca613450f714358fcc2dabd7b" Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.206013 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2lw8" Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.266037 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2lw8"] Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.273377 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c2lw8"] Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.276389 4874 scope.go:117] "RemoveContainer" containerID="96f19a91649c00d3c5d916c4724cdf97877eaf4c77feceb978fd362291601a3e" Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.297369 4874 scope.go:117] "RemoveContainer" containerID="169a02c51c136bfbf3fb31b59c7139a015d5b38327a437427f92cfbb15fd2587" Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.325549 4874 scope.go:117] "RemoveContainer" containerID="7c4043ead839e73ba762859fea2607e4f3556e5ca613450f714358fcc2dabd7b" Jan 22 12:48:39 crc kubenswrapper[4874]: E0122 12:48:39.326161 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c4043ead839e73ba762859fea2607e4f3556e5ca613450f714358fcc2dabd7b\": container with ID starting with 7c4043ead839e73ba762859fea2607e4f3556e5ca613450f714358fcc2dabd7b not found: ID does not exist" containerID="7c4043ead839e73ba762859fea2607e4f3556e5ca613450f714358fcc2dabd7b" Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.326223 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c4043ead839e73ba762859fea2607e4f3556e5ca613450f714358fcc2dabd7b"} err="failed to get container status \"7c4043ead839e73ba762859fea2607e4f3556e5ca613450f714358fcc2dabd7b\": rpc error: code = NotFound desc = could not find container \"7c4043ead839e73ba762859fea2607e4f3556e5ca613450f714358fcc2dabd7b\": container with ID starting with 7c4043ead839e73ba762859fea2607e4f3556e5ca613450f714358fcc2dabd7b not 
found: ID does not exist" Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.326250 4874 scope.go:117] "RemoveContainer" containerID="96f19a91649c00d3c5d916c4724cdf97877eaf4c77feceb978fd362291601a3e" Jan 22 12:48:39 crc kubenswrapper[4874]: E0122 12:48:39.326738 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f19a91649c00d3c5d916c4724cdf97877eaf4c77feceb978fd362291601a3e\": container with ID starting with 96f19a91649c00d3c5d916c4724cdf97877eaf4c77feceb978fd362291601a3e not found: ID does not exist" containerID="96f19a91649c00d3c5d916c4724cdf97877eaf4c77feceb978fd362291601a3e" Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.326772 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f19a91649c00d3c5d916c4724cdf97877eaf4c77feceb978fd362291601a3e"} err="failed to get container status \"96f19a91649c00d3c5d916c4724cdf97877eaf4c77feceb978fd362291601a3e\": rpc error: code = NotFound desc = could not find container \"96f19a91649c00d3c5d916c4724cdf97877eaf4c77feceb978fd362291601a3e\": container with ID starting with 96f19a91649c00d3c5d916c4724cdf97877eaf4c77feceb978fd362291601a3e not found: ID does not exist" Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.326793 4874 scope.go:117] "RemoveContainer" containerID="169a02c51c136bfbf3fb31b59c7139a015d5b38327a437427f92cfbb15fd2587" Jan 22 12:48:39 crc kubenswrapper[4874]: E0122 12:48:39.327268 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"169a02c51c136bfbf3fb31b59c7139a015d5b38327a437427f92cfbb15fd2587\": container with ID starting with 169a02c51c136bfbf3fb31b59c7139a015d5b38327a437427f92cfbb15fd2587 not found: ID does not exist" containerID="169a02c51c136bfbf3fb31b59c7139a015d5b38327a437427f92cfbb15fd2587" Jan 22 12:48:39 crc kubenswrapper[4874]: I0122 12:48:39.327293 4874 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"169a02c51c136bfbf3fb31b59c7139a015d5b38327a437427f92cfbb15fd2587"} err="failed to get container status \"169a02c51c136bfbf3fb31b59c7139a015d5b38327a437427f92cfbb15fd2587\": rpc error: code = NotFound desc = could not find container \"169a02c51c136bfbf3fb31b59c7139a015d5b38327a437427f92cfbb15fd2587\": container with ID starting with 169a02c51c136bfbf3fb31b59c7139a015d5b38327a437427f92cfbb15fd2587 not found: ID does not exist" Jan 22 12:48:40 crc kubenswrapper[4874]: I0122 12:48:40.729179 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df5f75a7-68cf-4efb-85d6-663e9e293c0d" path="/var/lib/kubelet/pods/df5f75a7-68cf-4efb-85d6-663e9e293c0d/volumes" Jan 22 12:48:43 crc kubenswrapper[4874]: I0122 12:48:43.520885 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:48:43 crc kubenswrapper[4874]: I0122 12:48:43.521241 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:49:13 crc kubenswrapper[4874]: I0122 12:49:13.520811 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:49:13 crc kubenswrapper[4874]: I0122 12:49:13.521598 4874 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:49:43 crc kubenswrapper[4874]: I0122 12:49:43.520566 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:49:43 crc kubenswrapper[4874]: I0122 12:49:43.521009 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:49:43 crc kubenswrapper[4874]: I0122 12:49:43.521052 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" Jan 22 12:49:43 crc kubenswrapper[4874]: I0122 12:49:43.521667 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53dc1afb13dd727722ac5bb9850ed1be0aeb705ab0c93f80550ac8600fdf68e8"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:49:43 crc kubenswrapper[4874]: I0122 12:49:43.521713 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" 
containerID="cri-o://53dc1afb13dd727722ac5bb9850ed1be0aeb705ab0c93f80550ac8600fdf68e8" gracePeriod=600 Jan 22 12:49:43 crc kubenswrapper[4874]: I0122 12:49:43.810206 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="53dc1afb13dd727722ac5bb9850ed1be0aeb705ab0c93f80550ac8600fdf68e8" exitCode=0 Jan 22 12:49:43 crc kubenswrapper[4874]: I0122 12:49:43.810266 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"53dc1afb13dd727722ac5bb9850ed1be0aeb705ab0c93f80550ac8600fdf68e8"} Jan 22 12:49:43 crc kubenswrapper[4874]: I0122 12:49:43.810641 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerStarted","Data":"512f18641516777047c81cf22d0f34b66336e79fdf481778071e86dc745dae96"} Jan 22 12:49:43 crc kubenswrapper[4874]: I0122 12:49:43.810666 4874 scope.go:117] "RemoveContainer" containerID="f8b0481d5958c4e3126d530e9c4112219efd18dd96b27a8af7b05bee45a4994b" Jan 22 12:51:43 crc kubenswrapper[4874]: I0122 12:51:43.521117 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:51:43 crc kubenswrapper[4874]: I0122 12:51:43.521889 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:52:13 crc kubenswrapper[4874]: 
I0122 12:52:13.520888 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:52:13 crc kubenswrapper[4874]: I0122 12:52:13.521465 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.427162 4874 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8j5sk"] Jan 22 12:52:15 crc kubenswrapper[4874]: E0122 12:52:15.427706 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5f75a7-68cf-4efb-85d6-663e9e293c0d" containerName="extract-content" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.427735 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5f75a7-68cf-4efb-85d6-663e9e293c0d" containerName="extract-content" Jan 22 12:52:15 crc kubenswrapper[4874]: E0122 12:52:15.427775 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5f75a7-68cf-4efb-85d6-663e9e293c0d" containerName="extract-utilities" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.427792 4874 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5f75a7-68cf-4efb-85d6-663e9e293c0d" containerName="extract-utilities" Jan 22 12:52:15 crc kubenswrapper[4874]: E0122 12:52:15.427822 4874 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5f75a7-68cf-4efb-85d6-663e9e293c0d" containerName="registry-server" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.427838 4874 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="df5f75a7-68cf-4efb-85d6-663e9e293c0d" containerName="registry-server" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.428070 4874 memory_manager.go:354] "RemoveStaleState removing state" podUID="df5f75a7-68cf-4efb-85d6-663e9e293c0d" containerName="registry-server" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.429770 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.450255 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8j5sk"] Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.457547 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-utilities\") pod \"redhat-operators-8j5sk\" (UID: \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\") " pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.457665 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-catalog-content\") pod \"redhat-operators-8j5sk\" (UID: \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\") " pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.457758 4874 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m9ft\" (UniqueName: \"kubernetes.io/projected/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-kube-api-access-6m9ft\") pod \"redhat-operators-8j5sk\" (UID: \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\") " pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.558790 4874 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-utilities\") pod \"redhat-operators-8j5sk\" (UID: \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\") " pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.558838 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-catalog-content\") pod \"redhat-operators-8j5sk\" (UID: \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\") " pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.558873 4874 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m9ft\" (UniqueName: \"kubernetes.io/projected/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-kube-api-access-6m9ft\") pod \"redhat-operators-8j5sk\" (UID: \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\") " pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.559470 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-utilities\") pod \"redhat-operators-8j5sk\" (UID: \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\") " pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.559499 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-catalog-content\") pod \"redhat-operators-8j5sk\" (UID: \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\") " pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.585011 4874 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m9ft\" (UniqueName: 
\"kubernetes.io/projected/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-kube-api-access-6m9ft\") pod \"redhat-operators-8j5sk\" (UID: \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\") " pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.761965 4874 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:15 crc kubenswrapper[4874]: I0122 12:52:15.982167 4874 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8j5sk"] Jan 22 12:52:16 crc kubenswrapper[4874]: I0122 12:52:16.030341 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j5sk" event={"ID":"7aa4b82e-4df8-48e6-a5a7-7994665e26c7","Type":"ContainerStarted","Data":"07f74157b807b2a05eba4b8e26ed2460aa0f21a6ccc677b52bf5605d386ba910"} Jan 22 12:52:17 crc kubenswrapper[4874]: I0122 12:52:17.037877 4874 generic.go:334] "Generic (PLEG): container finished" podID="7aa4b82e-4df8-48e6-a5a7-7994665e26c7" containerID="eb2a19caecc5454a93f49fc4f9ab75a3fe53e6bf338ab8ae7b3522cca4fb0fc6" exitCode=0 Jan 22 12:52:17 crc kubenswrapper[4874]: I0122 12:52:17.038133 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j5sk" event={"ID":"7aa4b82e-4df8-48e6-a5a7-7994665e26c7","Type":"ContainerDied","Data":"eb2a19caecc5454a93f49fc4f9ab75a3fe53e6bf338ab8ae7b3522cca4fb0fc6"} Jan 22 12:52:17 crc kubenswrapper[4874]: I0122 12:52:17.040317 4874 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:52:19 crc kubenswrapper[4874]: I0122 12:52:19.057316 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j5sk" event={"ID":"7aa4b82e-4df8-48e6-a5a7-7994665e26c7","Type":"ContainerStarted","Data":"a2187d719b1797b03f7c8469e92ae17b306634dd3fd22fabb35efaaa87135ae2"} Jan 22 12:52:22 crc 
kubenswrapper[4874]: I0122 12:52:22.079563 4874 generic.go:334] "Generic (PLEG): container finished" podID="7aa4b82e-4df8-48e6-a5a7-7994665e26c7" containerID="a2187d719b1797b03f7c8469e92ae17b306634dd3fd22fabb35efaaa87135ae2" exitCode=0 Jan 22 12:52:22 crc kubenswrapper[4874]: I0122 12:52:22.079638 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j5sk" event={"ID":"7aa4b82e-4df8-48e6-a5a7-7994665e26c7","Type":"ContainerDied","Data":"a2187d719b1797b03f7c8469e92ae17b306634dd3fd22fabb35efaaa87135ae2"} Jan 22 12:52:24 crc kubenswrapper[4874]: I0122 12:52:24.095416 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j5sk" event={"ID":"7aa4b82e-4df8-48e6-a5a7-7994665e26c7","Type":"ContainerStarted","Data":"b65033740bb3015260b8a13295eef496a81ccc449e06c0bd433357bda1460147"} Jan 22 12:52:24 crc kubenswrapper[4874]: I0122 12:52:24.125117 4874 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8j5sk" podStartSLOduration=3.360829217 podStartE2EDuration="9.125103145s" podCreationTimestamp="2026-01-22 12:52:15 +0000 UTC" firstStartedPulling="2026-01-22 12:52:17.040038918 +0000 UTC m=+4310.885109988" lastFinishedPulling="2026-01-22 12:52:22.804312836 +0000 UTC m=+4316.649383916" observedRunningTime="2026-01-22 12:52:24.121032879 +0000 UTC m=+4317.966103949" watchObservedRunningTime="2026-01-22 12:52:24.125103145 +0000 UTC m=+4317.970174215" Jan 22 12:52:25 crc kubenswrapper[4874]: I0122 12:52:25.763507 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:25 crc kubenswrapper[4874]: I0122 12:52:25.763766 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:26 crc kubenswrapper[4874]: I0122 12:52:26.818351 4874 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-8j5sk" podUID="7aa4b82e-4df8-48e6-a5a7-7994665e26c7" containerName="registry-server" probeResult="failure" output=< Jan 22 12:52:26 crc kubenswrapper[4874]: timeout: failed to connect service ":50051" within 1s Jan 22 12:52:26 crc kubenswrapper[4874]: > Jan 22 12:52:35 crc kubenswrapper[4874]: I0122 12:52:35.802259 4874 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:35 crc kubenswrapper[4874]: I0122 12:52:35.863282 4874 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8j5sk" Jan 22 12:52:36 crc kubenswrapper[4874]: I0122 12:52:36.034136 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8j5sk"] Jan 22 12:52:37 crc kubenswrapper[4874]: I0122 12:52:37.193354 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8j5sk" podUID="7aa4b82e-4df8-48e6-a5a7-7994665e26c7" containerName="registry-server" containerID="cri-o://b65033740bb3015260b8a13295eef496a81ccc449e06c0bd433357bda1460147" gracePeriod=2 Jan 22 12:52:37 crc kubenswrapper[4874]: I0122 12:52:37.582731 4874 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8j5sk"
Jan 22 12:52:37 crc kubenswrapper[4874]: I0122 12:52:37.675915 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-utilities\") pod \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\" (UID: \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\") "
Jan 22 12:52:37 crc kubenswrapper[4874]: I0122 12:52:37.675992 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m9ft\" (UniqueName: \"kubernetes.io/projected/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-kube-api-access-6m9ft\") pod \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\" (UID: \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\") "
Jan 22 12:52:37 crc kubenswrapper[4874]: I0122 12:52:37.677047 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-utilities" (OuterVolumeSpecName: "utilities") pod "7aa4b82e-4df8-48e6-a5a7-7994665e26c7" (UID: "7aa4b82e-4df8-48e6-a5a7-7994665e26c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 12:52:37 crc kubenswrapper[4874]: I0122 12:52:37.677287 4874 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-catalog-content\") pod \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\" (UID: \"7aa4b82e-4df8-48e6-a5a7-7994665e26c7\") "
Jan 22 12:52:37 crc kubenswrapper[4874]: I0122 12:52:37.677679 4874 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 12:52:37 crc kubenswrapper[4874]: I0122 12:52:37.681462 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-kube-api-access-6m9ft" (OuterVolumeSpecName: "kube-api-access-6m9ft") pod "7aa4b82e-4df8-48e6-a5a7-7994665e26c7" (UID: "7aa4b82e-4df8-48e6-a5a7-7994665e26c7"). InnerVolumeSpecName "kube-api-access-6m9ft". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 12:52:37 crc kubenswrapper[4874]: I0122 12:52:37.778690 4874 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m9ft\" (UniqueName: \"kubernetes.io/projected/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-kube-api-access-6m9ft\") on node \"crc\" DevicePath \"\""
Jan 22 12:52:37 crc kubenswrapper[4874]: I0122 12:52:37.802349 4874 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7aa4b82e-4df8-48e6-a5a7-7994665e26c7" (UID: "7aa4b82e-4df8-48e6-a5a7-7994665e26c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 12:52:37 crc kubenswrapper[4874]: I0122 12:52:37.880518 4874 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa4b82e-4df8-48e6-a5a7-7994665e26c7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.219706 4874 generic.go:334] "Generic (PLEG): container finished" podID="7aa4b82e-4df8-48e6-a5a7-7994665e26c7" containerID="b65033740bb3015260b8a13295eef496a81ccc449e06c0bd433357bda1460147" exitCode=0
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.219795 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j5sk" event={"ID":"7aa4b82e-4df8-48e6-a5a7-7994665e26c7","Type":"ContainerDied","Data":"b65033740bb3015260b8a13295eef496a81ccc449e06c0bd433357bda1460147"}
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.219854 4874 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8j5sk"
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.221285 4874 scope.go:117] "RemoveContainer" containerID="b65033740bb3015260b8a13295eef496a81ccc449e06c0bd433357bda1460147"
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.228456 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j5sk" event={"ID":"7aa4b82e-4df8-48e6-a5a7-7994665e26c7","Type":"ContainerDied","Data":"07f74157b807b2a05eba4b8e26ed2460aa0f21a6ccc677b52bf5605d386ba910"}
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.242545 4874 scope.go:117] "RemoveContainer" containerID="a2187d719b1797b03f7c8469e92ae17b306634dd3fd22fabb35efaaa87135ae2"
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.262610 4874 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8j5sk"]
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.268140 4874 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8j5sk"]
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.287940 4874 scope.go:117] "RemoveContainer" containerID="eb2a19caecc5454a93f49fc4f9ab75a3fe53e6bf338ab8ae7b3522cca4fb0fc6"
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.303492 4874 scope.go:117] "RemoveContainer" containerID="b65033740bb3015260b8a13295eef496a81ccc449e06c0bd433357bda1460147"
Jan 22 12:52:38 crc kubenswrapper[4874]: E0122 12:52:38.303952 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b65033740bb3015260b8a13295eef496a81ccc449e06c0bd433357bda1460147\": container with ID starting with b65033740bb3015260b8a13295eef496a81ccc449e06c0bd433357bda1460147 not found: ID does not exist" containerID="b65033740bb3015260b8a13295eef496a81ccc449e06c0bd433357bda1460147"
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.303989 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b65033740bb3015260b8a13295eef496a81ccc449e06c0bd433357bda1460147"} err="failed to get container status \"b65033740bb3015260b8a13295eef496a81ccc449e06c0bd433357bda1460147\": rpc error: code = NotFound desc = could not find container \"b65033740bb3015260b8a13295eef496a81ccc449e06c0bd433357bda1460147\": container with ID starting with b65033740bb3015260b8a13295eef496a81ccc449e06c0bd433357bda1460147 not found: ID does not exist"
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.304018 4874 scope.go:117] "RemoveContainer" containerID="a2187d719b1797b03f7c8469e92ae17b306634dd3fd22fabb35efaaa87135ae2"
Jan 22 12:52:38 crc kubenswrapper[4874]: E0122 12:52:38.304265 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2187d719b1797b03f7c8469e92ae17b306634dd3fd22fabb35efaaa87135ae2\": container with ID starting with a2187d719b1797b03f7c8469e92ae17b306634dd3fd22fabb35efaaa87135ae2 not found: ID does not exist" containerID="a2187d719b1797b03f7c8469e92ae17b306634dd3fd22fabb35efaaa87135ae2"
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.304290 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2187d719b1797b03f7c8469e92ae17b306634dd3fd22fabb35efaaa87135ae2"} err="failed to get container status \"a2187d719b1797b03f7c8469e92ae17b306634dd3fd22fabb35efaaa87135ae2\": rpc error: code = NotFound desc = could not find container \"a2187d719b1797b03f7c8469e92ae17b306634dd3fd22fabb35efaaa87135ae2\": container with ID starting with a2187d719b1797b03f7c8469e92ae17b306634dd3fd22fabb35efaaa87135ae2 not found: ID does not exist"
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.304308 4874 scope.go:117] "RemoveContainer" containerID="eb2a19caecc5454a93f49fc4f9ab75a3fe53e6bf338ab8ae7b3522cca4fb0fc6"
Jan 22 12:52:38 crc kubenswrapper[4874]: E0122 12:52:38.304578 4874 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb2a19caecc5454a93f49fc4f9ab75a3fe53e6bf338ab8ae7b3522cca4fb0fc6\": container with ID starting with eb2a19caecc5454a93f49fc4f9ab75a3fe53e6bf338ab8ae7b3522cca4fb0fc6 not found: ID does not exist" containerID="eb2a19caecc5454a93f49fc4f9ab75a3fe53e6bf338ab8ae7b3522cca4fb0fc6"
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.304602 4874 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2a19caecc5454a93f49fc4f9ab75a3fe53e6bf338ab8ae7b3522cca4fb0fc6"} err="failed to get container status \"eb2a19caecc5454a93f49fc4f9ab75a3fe53e6bf338ab8ae7b3522cca4fb0fc6\": rpc error: code = NotFound desc = could not find container \"eb2a19caecc5454a93f49fc4f9ab75a3fe53e6bf338ab8ae7b3522cca4fb0fc6\": container with ID starting with eb2a19caecc5454a93f49fc4f9ab75a3fe53e6bf338ab8ae7b3522cca4fb0fc6 not found: ID does not exist"
Jan 22 12:52:38 crc kubenswrapper[4874]: I0122 12:52:38.726120 4874 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aa4b82e-4df8-48e6-a5a7-7994665e26c7" path="/var/lib/kubelet/pods/7aa4b82e-4df8-48e6-a5a7-7994665e26c7/volumes"
Jan 22 12:52:43 crc kubenswrapper[4874]: I0122 12:52:43.520994 4874 patch_prober.go:28] interesting pod/machine-config-daemon-4prkg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 12:52:43 crc kubenswrapper[4874]: I0122 12:52:43.521696 4874 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 12:52:43 crc kubenswrapper[4874]: I0122 12:52:43.521753 4874 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4prkg"
Jan 22 12:52:43 crc kubenswrapper[4874]: I0122 12:52:43.522381 4874 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"512f18641516777047c81cf22d0f34b66336e79fdf481778071e86dc745dae96"} pod="openshift-machine-config-operator/machine-config-daemon-4prkg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 12:52:43 crc kubenswrapper[4874]: I0122 12:52:43.522445 4874 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerName="machine-config-daemon" containerID="cri-o://512f18641516777047c81cf22d0f34b66336e79fdf481778071e86dc745dae96" gracePeriod=600
Jan 22 12:52:43 crc kubenswrapper[4874]: E0122 12:52:43.653192 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:52:44 crc kubenswrapper[4874]: I0122 12:52:44.274328 4874 generic.go:334] "Generic (PLEG): container finished" podID="7c9653f9-cd5b-4b7a-8056-80ae8235d039" containerID="512f18641516777047c81cf22d0f34b66336e79fdf481778071e86dc745dae96" exitCode=0
Jan 22 12:52:44 crc kubenswrapper[4874]: I0122 12:52:44.274737 4874 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" event={"ID":"7c9653f9-cd5b-4b7a-8056-80ae8235d039","Type":"ContainerDied","Data":"512f18641516777047c81cf22d0f34b66336e79fdf481778071e86dc745dae96"}
Jan 22 12:52:44 crc kubenswrapper[4874]: I0122 12:52:44.274794 4874 scope.go:117] "RemoveContainer" containerID="53dc1afb13dd727722ac5bb9850ed1be0aeb705ab0c93f80550ac8600fdf68e8"
Jan 22 12:52:44 crc kubenswrapper[4874]: I0122 12:52:44.275389 4874 scope.go:117] "RemoveContainer" containerID="512f18641516777047c81cf22d0f34b66336e79fdf481778071e86dc745dae96"
Jan 22 12:52:44 crc kubenswrapper[4874]: E0122 12:52:44.275697 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:52:58 crc kubenswrapper[4874]: I0122 12:52:58.716643 4874 scope.go:117] "RemoveContainer" containerID="512f18641516777047c81cf22d0f34b66336e79fdf481778071e86dc745dae96"
Jan 22 12:52:58 crc kubenswrapper[4874]: E0122 12:52:58.717231 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:53:09 crc kubenswrapper[4874]: I0122 12:53:09.716851 4874 scope.go:117] "RemoveContainer" containerID="512f18641516777047c81cf22d0f34b66336e79fdf481778071e86dc745dae96"
Jan 22 12:53:09 crc kubenswrapper[4874]: E0122 12:53:09.718059 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:53:23 crc kubenswrapper[4874]: I0122 12:53:23.716095 4874 scope.go:117] "RemoveContainer" containerID="512f18641516777047c81cf22d0f34b66336e79fdf481778071e86dc745dae96"
Jan 22 12:53:23 crc kubenswrapper[4874]: E0122 12:53:23.717064 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:53:34 crc kubenswrapper[4874]: I0122 12:53:34.718955 4874 scope.go:117] "RemoveContainer" containerID="512f18641516777047c81cf22d0f34b66336e79fdf481778071e86dc745dae96"
Jan 22 12:53:34 crc kubenswrapper[4874]: E0122 12:53:34.719827 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:53:48 crc kubenswrapper[4874]: I0122 12:53:48.716002 4874 scope.go:117] "RemoveContainer" containerID="512f18641516777047c81cf22d0f34b66336e79fdf481778071e86dc745dae96"
Jan 22 12:53:48 crc kubenswrapper[4874]: E0122 12:53:48.716796 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:53:59 crc kubenswrapper[4874]: I0122 12:53:59.716568 4874 scope.go:117] "RemoveContainer" containerID="512f18641516777047c81cf22d0f34b66336e79fdf481778071e86dc745dae96"
Jan 22 12:53:59 crc kubenswrapper[4874]: E0122 12:53:59.717311 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"
Jan 22 12:54:14 crc kubenswrapper[4874]: I0122 12:54:14.716876 4874 scope.go:117] "RemoveContainer" containerID="512f18641516777047c81cf22d0f34b66336e79fdf481778071e86dc745dae96"
Jan 22 12:54:14 crc kubenswrapper[4874]: E0122 12:54:14.718126 4874 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4prkg_openshift-machine-config-operator(7c9653f9-cd5b-4b7a-8056-80ae8235d039)\"" pod="openshift-machine-config-operator/machine-config-daemon-4prkg" podUID="7c9653f9-cd5b-4b7a-8056-80ae8235d039"